Hungarian Notation - Disadvantages

Disadvantages

Most arguments against Hungarian notation are against Systems Hungarian notation, not Apps Hungarian notation. Some potential issues are:

  • Hungarian notation is redundant when the compiler performs type-checking. Compilers for type-checked languages automatically ensure that a variable is used consistently with its type; checks by eye are redundant and subject to human error (see the first sketch after this list).
  • Modern integrated development environments display variable types on demand and automatically flag operations that use incompatible types, making the notation largely obsolete.
  • Hungarian notation becomes confusing when it is used to represent several properties, as in a_crszkvc30LastNameCol: a constant reference argument holding the contents of a database column LastName of type varchar(30) that is part of the table's primary key (decoded in the second sketch after this list).
  • It may lead to inconsistency when code is modified or ported. If a variable's type is changed, either the decoration on the name becomes inconsistent with the new type, or the variable's name must be changed. A particularly well-known example is the standard WPARAM type and the accompanying wParam formal parameter in many Windows system function declarations. The 'w' stands for 'word', where 'word' is the native word size of the platform's hardware architecture. Originally a 16-bit type on 16-bit word architectures, it became a 32-bit type on 32-bit architectures and a 64-bit type on 64-bit architectures in later versions of the operating system, while retaining its original name (its true underlying type is UINT_PTR, that is, an unsigned integer large enough to hold a pointer). The resulting semantic mismatch, and the programmer confusion and platform-to-platform inconsistency that follow, stem from the assumption that 'w' still means a 16-bit word in these different environments (see the third sketch after this list).
  • Most of the time, knowing the use of a variable implies knowing its type. Furthermore, if the usage of a variable is not known, it cannot be deduced from its type.
  • Hungarian notation reduces the benefits of feature-rich code editors that support completion on variable names, because the programmer has to type the type prefix before reaching the meaningful part of the name.
  • It makes code less readable, by obfuscating the purpose of the variable with needless type and scoping prefixes.
  • The additional type information is a poor substitute for a descriptive name. For example, sDatabase tells the reader only that the variable is a string; databaseName would say what it actually represents.
  • When a name is sufficiently descriptive, the additional type information is redundant. For example, firstName is almost certainly a string, so renaming it sFirstName only adds clutter to the code (compare the fourth sketch after this list).
  • Prefixed names are harder to remember.
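
The first point can be illustrated with a minimal C++ sketch (the identifiers szCount and name are hypothetical): the compiler verifies how a value is actually used against its declared type, so a prefix that merely restates, or even contradicts, that type adds no safety.

    #include <string>

    int szCount = 0;             // stale prefix: 'sz' claims a zero-terminated string,
                                 // but the declared type is int

    void example()
    {
        // szCount.length();     // would not compile: the compiler checks the real
                                 // type (int), regardless of what the prefix claims
        std::string name = "Ada";
        name.length();           // correct usage is verified by the compiler without
                                 // any help from a naming prefix
    }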
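The compound prefix in a_crszkvc30LastNameCol can be decoded as in the following sketch, a hypothetical C++ signature in which std::string stands in for the varchar(30) column value:

    #include <string>

    // a_           argument
    // c            constant
    // r            reference
    // sz           zero-terminated string
    // vc30         varchar(30) database column
    // LastNameCol  the LastName column itself
    void storeLastName(const std::string& a_crszkvc30LastNameCol)
    {
        // The reader must strip several layers of prefix before reaching "LastName".
        // A plain descriptive parameter such as lastName carries the same information
        // once the declared type is visible.
    }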
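The WPARAM example can be sketched as follows. The aliases below are simplified stand-ins for the definitions in the Windows SDK headers (illustrative assumptions, not the actual headers), and handleMessage is a hypothetical window-procedure-style function.

    #include <cstdint>

    using UINT_PTR = std::uintptr_t;  // unsigned integer wide enough to hold a pointer
    using WPARAM   = UINT_PTR;        // 'w' once meant a 16-bit "word"; the type is now
                                      // 32-bit or 64-bit depending on the platform

    long handleMessage(unsigned int message, WPARAM wParam, long lParam)
    {
        // On a 64-bit build sizeof(wParam) == 8, so the 'w' prefix no longer
        // describes the width of the value it decorates; the name is stale,
        // yet the code still compiles and runs.
        return static_cast<long>(wParam) + lParam;
    }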
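Finally, a short comparison for the points about descriptive names (the variable names are hypothetical):

    #include <string>

    std::string sDatabase;     // the prefix repeats the type but not the meaning:
                               // a string of what?
    std::string databaseName;  // a descriptive name answers that question, and the
                               // declared type already says "string"

    std::string firstName;     // here the name alone implies a string, so the prefix
                               // in sFirstName would add only clutter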
