Managed Chaos
Naresh Jain's Random Thoughts on Software Development and Adventure Sports
     
Stay away from Design Patterns; master Values, Principles and Basic Skills first

Thursday, June 7th, 2012

Unfortunately, many developers think they already know enough about values and principles, so it's not worth spending any time learning or practicing them. Instead they want to learn the holy grail of software design: design patterns.

Values, Principles and Patterns

While developers might think design patterns are the ultimate nirvana of design, I would say they really need to learn how to think in the object-oriented paradigm and internalize the values and principles first. I would suggest developers focus on the basics for at least 6 months before going near patterns. Otherwise they will gain only a superficial knowledge of the patterns and won't be able to appreciate their true value or applicability.

Any time I get a request to teach OO design patterns, I insist that we start the workshop with a recap of the values and principles. Time and again, I find developers in the class agreeing that they have a hard time thinking in objects. Most developers are still very much programming in a procedural way, trying to retrofit objects but not really thinking in terms of objects and responsibilities.

Recently a student in my class shared that:

I thought we were doing Object Oriented Programming, but clearly it's a paradigm shift that we still need to make.

On the first day, I start the class with simple labs. In the first 4-5 labs, you can see developers struggle to come up with a simple object-oriented solution to basic design problems. They end up with very procedural solutions:

  • a main class with a bunch of static methods, or
  • data-holder classes with public accessors, and other "-er/-or" classes pulling that data out to do the logic (a minimal sketch of this contrast follows the list), or
  • very heavy use of if-else or switch statements.
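
To make this concrete, here is a minimal sketch of the contrast, using a made-up Wallet example (the names and numbers are mine, not code from the labs):

    // Procedural style: a data holder plus an "-er/-or" class that pulls the data out.
    class WalletData {
        public double balance;                  // public data, no behavior
    }

    class WalletManager {
        public void withdraw(WalletData wallet, double amount) {
            if (wallet.balance >= amount) {
                wallet.balance -= amount;
            }
        }
    }

    // Object-oriented style: the data and the logic that needs it live together.
    class Wallet {
        private double balance;                 // state stays private

        Wallet(double openingBalance) {
            this.balance = openingBalance;
        }

        void withdraw(double amount) {
            if (balance >= amount) {            // the object protects its own invariant
                balance -= amount;
            }
        }

        double balance() {
            return balance;
        }
    }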

It's sad that teams neither understand nor give importance to the following OO concepts:

  • Data and logic together (Encapsulation): Everyone knows the definition of encapsulation, but how to put it to proper use is always a big question mark. Many developers write public getters/setters or accessors on their classes by default, and are shocked to realize that this breaks encapsulation.
  • Inheritance is the last option: It is quite common to see solutions where a slight variation in data is modeled as a hierarchy of classes. The child classes have no behavior in them, and this often leads to a class explosion.
  • Composition over Inheritance: In various labs, developers go down the route of using inheritance to reuse behavior instead of thinking of a composition-based design. Sadly, inheritance-based solutions have various flaws that the team doesn't realize until they are highlighted. Coming up with a good inheritance-based design, when the parent is mutable, is extremely tricky. (A small sketch contrasting the two approaches follows this list.)
  • Focus on smart data structures: Developers have a tough time coming up with a smart data structure and putting logic around it. Via various labs, I try to demonstrate how designing smart data structures can make their code extremely simple and minimal.
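
Here is a rough sketch of what I mean by composition over inheritance; Formatter and ReportExporter are hypothetical names I'm using purely for illustration:

    // Reuse via inheritance would mean something like "CsvExporter extends BufferedExporter"
    // just to reuse the parent's buffering; any change to the mutable parent ripples into
    // every child. With composition, the varying piece is a small collaborator instead.
    interface Formatter {
        String format(String body);
    }

    class CsvFormatter implements Formatter {
        public String format(String body) {
            return body.replace('\t', ',');     // tab-separated input becomes CSV
        }
    }

    class ReportExporter {
        private final Formatter formatter;      // behavior is plugged in, not inherited

        ReportExporter(Formatter formatter) {
            this.formatter = formatter;
        }

        String export(String body) {
            return formatter.format(body);
        }
    }

A ReportExporter composed with a CsvFormatter gets the CSV behavior without locking the design into a class hierarchy.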

I’ve often noticed that when you give developers a problem, they start off by drawing a flow chart, data-flow diagram or some other diagram which naturally leads them into a procedural design.

Thinking in terms of objects requires thinking about objects and responsibilities, not so much about flow. It's extremely important to understand the problem, distill it down to its crux by eliminating noise, and then build a solution in an incremental fashion. Many developers have a misconception that a good design has to be fully fleshed out before you start. I've usually found that a good design emerges in an incremental fashion.

Even though many developers know the definitions of high cohesion, low coupling and conceptual integrity, when asked to give a software or non-software example, they have a hard time. It goes to show that they don't really understand these concepts in action.

Developers might have read the bookish definitions of the various design principles. But when asked to find out what design principles were violated in a sample design, they are not able to articulate it. You also often find a lot of misconceptions about the various principles. Take Single Responsibility: a few developers say that each method should do only one thing and a class should have only one responsibility. But what does that actually mean? It turns out SRP has more to do with temporal symmetry and change: grouping behavior together from a change point of view.
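
A small, hypothetical example of what "grouping by reason for change" looks like (the class names and tax numbers are made up):

    // One class, two very different reasons to change: tax rules and report layout.
    class TaxCalculator {
        double taxFor(double income) {
            return income * 0.3;                 // changes when the tax rules change
        }

        String asHtml(double tax) {
            return "<p>Tax: " + tax + "</p>";    // changes when the presentation changes
        }
    }

    // Grouped from a change point of view: each class now has a single axis of change.
    class TaxPolicy {
        double taxFor(double income) {
            return income * 0.3;
        }
    }

    class TaxReport {
        String asHtml(double tax) {
            return "<p>Tax: " + tax + "</p>";
        }
    }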

Even though most developers raise their hands when asked if they know code smells, they have a tough time identifying them or avoiding them in their design. Developers need a lot of hands-on practice to recognize and avoid various code smells. Once you learn to recognize code smells, the next step is to learn how to effectively refactor away from them.
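
As one small example of recognizing a smell and refactoring away from it, here is a sketch in the spirit of the classic refactoring literature (the Rental class is made up, not code from my labs):

    // Before (Feature Envy): a separate Statement class kept calling rental.getDaysRented(),
    // rental.getDailyRate() and rental.getMovieTitle() just to build one line of text.
    // After (Move Method): the calculation now lives next to the data it was envying.
    class Rental {
        private final String movieTitle;
        private final int daysRented;
        private final double dailyRate;

        Rental(String movieTitle, int daysRented, double dailyRate) {
            this.movieTitle = movieTitle;
            this.daysRented = daysRented;
            this.dailyRate = dailyRate;
        }

        double charge() {
            return daysRented * dailyRate;
        }

        String statementLine() {
            return movieTitle + ": " + charge();
        }
    }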

Often I find developers have the most expensive and jazzy tools & IDEs, but when you watch them code, they use their IDE just as a text editor. No automated refactoring. Most developers type "public class Xxx" by hand instead of writing the caller code first and then using the IDE to generate the required skeleton code for them. Use of keyboard shortcuts is as rare as seeing a solar eclipse. Pretty much most developers practice what I call mouse-driven programming. In my experience, better use of the IDE and refactoring tools can give developers at least a 5x productivity boost.

I hardly remember meeting a developer who said they don't know how to unit test. Yet time and again, most developers in my class struggle to write good unit tests. Due to lack of familiarity, lack of practice or stupid misconceptions, most developers skip writing any automated unit tests. Instead they use Sysout/Console.out or other debugging mechanisms to validate their logic. Getting better at unit testing, and then gradually at TDD, can really give them a productivity boost and improve their code quality.
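
For instance, instead of sprinkling System.out.println and eyeballing the console, the same check can be captured as an automated test. A minimal JUnit 5 sketch, assuming the hypothetical Wallet class from the earlier example:

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    class WalletTest {
        @Test
        void withdrawingReducesTheBalance() {
            Wallet wallet = new Wallet(100);
            wallet.withdraw(40);
            // The expectation is recorded once and re-checked on every run.
            assertEquals(60.0, wallet.balance(), 0.001);
        }
    }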

I would be extremely happy if every development shop invested 1 hour each week to organize a refactoring fest, design fest or coding dojo for their developers to practice and hone their design skills. Developers can attend as many trainings as they want, but unless they deliberately practice and apply these techniques on the job, it will not help.

The I Naming Convention

Saturday, June 13th, 2009

I don’t like the I<something> naming convention for interfaces for various reasons.

Let me explain with an example. Let's say we have an IAccount interface. Why is this bad?

  • All I care about is that I'm talking to an Account; it really does not matter whether it's an interface, an abstract class or a concrete class. So the I in the name is noise for me.
  • It might turn out that I only ever have one type of Account. Why do I need to create an interface then? It's the speculative generality code smell and a violation of the YAGNI principle. If someday I need multiple types of Accounts, I'll create the interface then. It's probably going to be as much effort to create it then as it is to do it now, minus all the maintenance overhead.
  • Let's say we have multiple types of Accounts. Instead of calling the interface IAccount and the child classes AccountImpl1 or SavingAccountImpl, I would rather refer to it as Account and the different types of account as SavingAccount or CreditAccount. It might also turn out that there is common behavior between the two types of Account. At that point, instead of keeping IAccount and creating another abstract class called AbstractAccount, I would just change the Account interface into an abstract class. I don't want to lock myself into an interface.
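
A minimal sketch of the naming I prefer (deposit and balance are just placeholder methods I made up for illustration):

    // The abstraction keeps the clean name; the variants get descriptive names.
    interface Account {
        void deposit(double amount);
        double balance();
    }

    class SavingAccount implements Account {
        private double balance;

        public void deposit(double amount) {
            balance += amount;                   // saving-specific rules would live here
        }

        public double balance() {
            return balance;
        }
    }

    class CreditAccount implements Account {
        private double balance;

        public void deposit(double amount) {
            balance += amount;                   // credit-specific rules would live here
        }

        public double balance() {
            return balance;
        }
    }

    // Callers simply depend on Account, never on IAccount or SavingAccountImpl:
    // Account account = new SavingAccount();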

Personally, I think the whole I<something> convention is a hangover from C++ and its precursors.

Some people also argue that the I<something> convention makes it easy for them to know whether something is an interface or not (especially because they don't want to spend time typing new Account() and then realizing that it's an interface).

The way I look at it, good IDEs will show the appropriate icon, and that can help you avoid this issue to some extent. But even if they did not, to me it's not a big deal. Code is read far more often than it is written. Maintaining one extra poorly named interface is far more expensive than the minuscule time saved in typing.

P.S.: I'm certainly not discouraging people from creating interfaces. What I don't like is having only one class inheriting from an interface. Maybe if you are exposing an API, you might still be able to convince me. But in most cases people use this convention throughout their code base, not just at the boundaries of their system.

In a lot of cases I find myself starting off with an interface, because when I'm developing some class, I don't want to start building its collaborators. But then when I start TDDing that class, I'll change the interface to a concrete class. Later my tests might drive different types of subclasses, and I might introduce an interface or an abstract class as suitable. And maybe sometime in the future I might break the hierarchy and use composition instead. Guess what: we are dealing with evolutionary design.
