Why Building Apps the Wrong Way Can Be the Right Way (medium.com/google-developers)
91 points by jtwebman on Feb 24, 2016 | 43 comments


It may seem like it, but no engineer writes "right" code on the first try. They get it working, then they iterate, cleaning it up to make it "right".

Experience teaches us where to start, where to think ahead a little, and how to structure code such that it can be refactored in the future.

Writing code the "right way" means 1) getting something working, then 2) taking what you've learned and refactoring and/or rewriting it.

If you care about maintaining a chunk of code for 6 months, or a year, you'll quickly learn to refactor. If you are prototyping, right likely isn't a priority to begin with.


The three priorities of software development:

1) Make it work (at all)

2) Make it work right

3) (If necessary) make it work fast


I heard a similar quote from Nathan Marz of Storm fame: "First make it possible, then make it beautiful, then make it fast" http://nathanmarz.com/blog/suffering-oriented-programming.ht...


The three priorities of software companies:

1) Make it seem to work

2) Ship it

3) Profit ?


Being self-taught left me with the [false] impression that the reason I don't write good code the first time around is because I never formally studied CS.


I guess that depends on whether you are developing Candy Crush or an aircraft guidance system. Everything starts with prototyping, but only on low-risk software will engineering procedure be discarded for the sake of 'failing quickly' or being first to market.


If you build it they might come. If you don't build it they can't come. Therefore you must build it for them to come. Therefore the most important thing you do to make them come, is to build it by any means necessary.

"Them", of course, means customers.


Thanks, just what I needed to push me to build my side project. I think I need this on a shirt.


I agree that, at first, released code beats quality code. Once you have the demand you can refactor to make it easier to maintain.


Most of the code I write at work is just automating processes and I usually do it piecemeal until managing it all gets too unwieldy then I refactor and simplify. I'm not exactly building one to throw away, but I'm using a lot of essentially placeholder code that I know I'm going to replace with a more coherent system once I see the pieces working together. Sometimes the quick and dirty stuff is good enough and I never need to touch it again, though.


I've found that, a lot of the time, the quick and dirty stuff is what I never come back to.


In some companies, it's the stuff you're not allowed to come back to because 'it works'.


Working code that has been battlefield tested and ugly as anything is still more valuable than pretty but unused code in many scenarios.


Brooks said it best: "Build one to throw away"


I don't think I was ever allowed to throw away a prototype. They always end up being the production version in the end. I make an effort to make reasonably good prototypes for that reason.


I might not toss it away all at once, but by the time I'm done with v2, little of the original code remains.


> Twenty minutes of debugging later, it turns out I’d forgotten to call .show() after creating a Toast.

It amazes me that people still think that strong typing isn't useful.


I don't think it would be much help in that particular situation.


Yeah, what kind of strong typing is going to keep you from NOT calling a function?


The research into this is usually called "effect systems", I think. You can use it to prove that ugly old APIs like OpenGL are being used properly in your code.

As far as the real world goes, all you get is -Wunused.


Adding annotation @CheckResult on android.widget.Toast#makeText(android.content.Context, java.lang.CharSequence, int)


This is Java; it is strongly typed. You need to do:

  Toast toast = Toast.makeText(context, text, duration);
  toast.show();


I read that as "it amazes me that people aren't all using Haskell yet".

Because 'show' is probably all done for side effects. And then, maybe, Haskell would have caught that, due to the wrong type signature. If called from an otherwise pure function.

Most other "strong typed" languages wouldn't catch that. But that's hardly something type systems solve on their own.


Even Java would have caught that assuming .show() is Java's .toString().


Nope. Creating a Toast object just creates the object. Calling .show() on the newly created object is what makes it actually draw something on the screen. The difficulty here is to realize that "new Toast(myMessage)" is not enough, you need to execute "new Toast(myMessage).show()".

By the way, I am 100% sure that Java wouldn't have caught that, because this IS Java.
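A minimal plain-Java sketch of the trap, using a hypothetical FakeToast stand-in (not the real android.widget.Toast): constructing the object compiles and runs fine, but nothing is "shown" until show() is called.

```java
// Stand-in for android.widget.Toast: construction alone has no visible effect.
class FakeToast {
    private final String message;
    private boolean shown = false;

    private FakeToast(String message) {
        this.message = message;
    }

    static FakeToast makeText(String message) {
        return new FakeToast(message); // compiles and runs, but displays nothing yet
    }

    void show() {
        shown = true; // only now would the message actually appear on screen
    }

    boolean isShown() {
        return shown;
    }
}

public class ToastPitfall {
    public static void main(String[] args) {
        FakeToast forgotten = FakeToast.makeText("hello"); // the bug: no .show()
        System.out.println(forgotten.isShown()); // prints false

        FakeToast correct = FakeToast.makeText("hello");
        correct.show();
        System.out.println(correct.isShown()); // prints true
    }
}
```

The compiler is perfectly happy with the first version; the program is type-correct and simply does nothing visible.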


It is just a library that is constructed in a way that is not obvious to the programmer who uses it.

You can build that sort of library in any language.
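For instance, a library author could make the mistake inexpressible by collapsing the two steps into one. A hypothetical sketch (SafeToast is made up; it is not the real Android API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical misuse-proof design: the only public entry point both
// builds and shows, so "created but never shown" cannot be expressed.
final class SafeToast {
    private static final List<String> displayed = new ArrayList<>();

    private SafeToast() {} // no public constructor, no dangling half-shown instances

    static void show(String message) {
        displayed.add(message); // stand-in for drawing the message on screen
    }

    static List<String> displayedMessages() {
        return displayed;
    }
}

public class SafeToastDemo {
    public static void main(String[] args) {
        SafeToast.show("saved!"); // one call: nothing to forget
        System.out.println(SafeToast.displayedMessages()); // prints [saved!]
    }
}
```

The trade-off is that a one-shot API leaves no object to configure before display, which is presumably why Android kept the two-step design.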


This has nothing to do with either the typing or the language, and everything to do with the API.


I'm not sure how one would encode that information in Java. Isn't toString() a special case in Java? If you concatenate any object with a String, something (StringBuilder?) will call its toString() method. I'm not sure what the equivalent would be for a widget that can be perfectly valid to add to another widget in both a hidden and a show()n state.

To compare with toString: it's more as if, when you didn't call obj.show(), obj.toString() returned the empty string.


> It amazes me that people still think that strong typing isn't useful.

I've heard people argue that its not always a net benefit, but I've never heard anyone argue that it doesn't have benefits.


I have never used a strongly typed language. But it seems like it would be almost strictly superior to the php and ruby I use now. So why on earth isn't everything strongly typed? What's the downside?


After going back and forth between dynamic and static typing (PHP, JS, Java and Scala), I would say dynamic typing lets you code quicker because you can be less explicit about what you're doing, but static typing lets you make code production-grade sooner because of all the additional checks which catch bugs. Dynamic typing feels more productive while coding, but when all is said and done it often ends up being less so.

In php you can do strict typing at the function call boundary, which is where it really matters, but you need the discipline to do it consistently. I like scala's approach of inferring types as much as possible and only stopping you when you're trying to do something semantically impossible. Scala often feels like a weakly typed system while coding, which is why it's so fun. For that matter, functional style java 8 feels the same way some of the time.

I think the ideal system is strongly typed but maximally inferred to minimize verbosity and maximize coding productivity. Scala is too weird to be that language though.


Not many downsides, if implemented right. Problem is, most languages have very simplistic type systems, which get in your way, specially if you are trying to code a function that will operate with many different possible "types".

Then they add kludges to make life bearable (see also: Generics), that will then add complexity. And are sometimes horribly implemented (see: Java's type erasure), or C++ compiler authors struggling with its template system (and users struggling to this day).

Or sometimes they don't even add the kludges, making life tedious (see: Go, and endless discussions about adding generics to it)

The other category of hacks is what's usually called "reflection", or "introspection". Any time a language designer feels it necessary to add such a name for the ability to look things up, you know you are in a world of pain. The contortions required by some languages to implement Ruby's 'respond_to?' method can get ridiculous. So much so that the usual workaround is to add 'interfaces', which are a type (that does nothing useful), so you can check the type. Or pass it around to something that will expect that over-arching type.

Sometimes you just don't care about a thing's type. And, many times, the types that you should care about are not the ones that are actually specified. If I'm doing something important, I don't want the compiler to check if the type is integer, I want it to check if the type is, say, Miles, as opposed to Kilometers. That's what's important, or my costly space probe will literally crash.

A language should make it easy to do so, otherwise people won't bother. Or probes will crash. Even though the program compiled. And the UML diagrams are spotless, with the factories all building the object types they were supposed to.
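Even Java's simple type system can encode the units, at the cost of wrapper ceremony. A sketch with hypothetical Miles/Kilometers records (Java 16+), where mixing up units becomes a compile error rather than a crashed probe:

```java
// Hypothetical unit wrappers: a Miles value can no longer be passed
// where Kilometers is expected; the mix-up fails at compile time.
record Miles(double value) {
    Kilometers toKilometers() {
        return new Kilometers(value * 1.609344); // 1 mile = 1.609344 km by definition
    }
}

record Kilometers(double value) {
}

public class Units {
    // Accepts only Kilometers; handing it a raw double or Miles won't compile.
    static Kilometers burnDistance(Kilometers d) {
        return d;
    }

    public static void main(String[] args) {
        Miles approach = new Miles(100);
        System.out.println(burnDistance(approach.toKilometers()).value()); // ~160.93
        // burnDistance(approach);   // compile error: incompatible types
        // burnDistance(160.9344);   // compile error: double is not Kilometers
    }
}
```

Languages with units-of-measure support (or libraries built on generics) make this less tedious, but the point stands: the checks you most care about are often ones you have to define yourself.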


C/Java-style statically typed languages have the downside that they're super redundant. Typing "MyObject myObject = new MyObject();" is not fun in your free time. The same goes for any other required proof in your program, when you already know you're right.

Hey, remember this guy? He used to be pretty famous, you know.

https://sites.google.com/site/steveyegge2/is-weak-typing-str...

http://steve-yegge.blogspot.com/2008/05/dynamic-languages-st...


Kotlin on the JVM is a nice improvement over Java, still statically typed, the compiler infers the type for obj.

  var obj = MyObject()
  ...
  obj.foo()
So you could change the obj instantiation to a different type and as long as the new type has foo() too then you're good to go.

Also you can have your typed properties stored in a map, which you could also add other values to dynamically at runtime.

from https://kotlinlang.org/docs/reference/delegated-properties.h...

  class MutableUser(val map: MutableMap<String, Any?>) {
    var name: String by map
    var age: Int     by map
  }
I haven't used Scala but its structural ("duck") typing seems like it would get closer to a dynamic-typing feel with static types. Java is pretty verbose and limited in ways, and many devs having backgrounds in EJBs and Struts doesn't help. It would be interesting to see the same side-by-side project comparison in those links with a team using Scala well. Anyway, that's all coming from my bias of having done 12 years of Java and 1 year of full JavaScript with Angular.


Stevey would criticize languages left and right, though. I wonder if he is coding in dynamic languages at Google nowadays... Most likely he is writing Go while complaining about it.

(miss you stevey please resume writing)


Neither of these are even the article I was thinking of. There was one with the same method written in Java and Perl, doing the same thing, and both obviously correct. But the Perl one was much much shorter because you didn't have to type any types.


Isn't Ruby strongly typed (like Python)? Or do you mean statically typed?


It is.


Good point. I did mean statically.


Speaking as someone who often looks at PHP and wishes it were Java, I'll try some devil's advocacy...

Statically typed stuff requires more decision-making effort from the user as they write code (in exchange for lowering the effort to read/analyze it later). Naive decisions can impact how other people use your code, requiring them to "fix it up". (Fortunately, refactoring tools can help.)

Another downside is when the type system the language provides only partially addresses your vision. Then it gets a little tricky to decide how to optimize your baby-to-bathwater ratio.


> What's the downside?

If you're used to dynamic imperative languages, you have to learn a lot of new concepts, ways of thinking and new kinds of error messages. Also, it's hard for the ecosystems of strongly typed languages to compete, given current popularity.


It's kind of a headache to get started if you just want something quick and dirty.


I think I've heard this argued before. They say if it acts like a duck and walks like a duck, then it is a duck; and that duck typing promotes unit tests instead.



