  
  This section tries to get the essential ideas across quickly -
  sorry if this annoys any OO "purists".  (Insert "usually",
  "often", "implementations typically provide", etc., as
  appropriate.)
  
  In some ways, Object Orientation is not much different to
  "traditional" programming - depending a bit on the way you
  program now, of course.  Importantly, the emphasis with OO is
  to focus on the data: what the data items are, how they're
  built up, what behaviours make sense, the interactions that
  occur, and so on.  (By "data" is meant all information used by
  an application: in files, in memory, etc.)
  
  An item of data is typically composed of one or more simpler
  data items, which are themselves either built from simpler
  data still or are values of the primitive (fundamental) data
  types provided by the language.
  
  The data items that form part of another data item are known
  by various names, such as: attributes, properties, instance
  data, or instance variables.
  
  Data has to be processed (handled) somehow: typically by
  programs, which are built from things like functions and
  procedures.  You should think of OO as being primarily about
  data and data structures, and secondarily about functions to
  manipulate the data.  The alternative way of looking at things
  is known as procedural - the emphasis being on what the
  operations (functions) are, with the data downgraded in
  importance.
  
  It's unfortunate that the term procedural is used, as it makes
  it sound as though procedures/functions aren't used in OO.
  This isn't the place to discuss issues of terminology,
  however...
  
  Some people also use the term procedural to mean the opposite
  of event driven, i.e. to distinguish the way typical DOS
  applications are structured from the way that GUI applications
  need to be structured.  Since typical DOS applications make
  the user do things in a step-by-step fashion dictated by the
  programmer, "procedural" is a pretty good description,
  although you might prefer to use "modal".
  
  As mentioned above, OO recognises that data inherently needs
  to be manipulated, just as structured programming and other
  paradigms/methodologies recognise.  OO allows you to define
  operations which are specific to the data type being
  manipulated.  These are known as methods, and are essentially
  the same as the functions and procedures of traditional
  programming languages.
  
  In OO, the data items (properties) and methods together make a
  class - also known as an abstract data type ("abstract"
  because it's an abstraction to a higher conceptual level).
  So, a class is a description of, or template for, a new kind
  of data and the operations on it.  Notice that the class
  itself isn't the data, it's a description of how to make a
  data item and how it would behave.  Such a data item is called
  an object.
  
  As an example, your program might have a stock class, with
  objects such as nuts, bolts and nails.  The attributes
  (instance variables) of each stock object might include
  quantity, price and supplier.  Presumably supplier would be
  another class, and so on.  The methods for the stock class
  might include ChangePrice, ChangeDescription, and
  ChangeQuantity.
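
  To make this concrete, here is a minimal sketch of such a
  stock class.  It's written in Python purely for illustration
  (the TopClass / VO syntax appears in later sections), and
  every name in it is invented for this guide:

      class Stock:
          # The constructor (described shortly) sets up each
          # new object's instance variables.
          def __init__(self, description, quantity, price, supplier):
              self.description = description
              self.quantity = quantity
              self.price = price
              self.supplier = supplier

          # Methods: operations specific to this kind of data.
          def ChangePrice(self, new_price):
              self.price = new_price

          def ChangeQuantity(self, delta):
              self.quantity = self.quantity + delta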
  
  Making an object is called instantiating the class - the
  object is an instance (occurrence) of the class.  This
  terminology is easily confused with the instance data or
  instance variables introduced earlier.  These are so named
  because each instance of a class (i.e. each object) has its
  own storage space for its data items.  So, the above nut, bolt
  and nail objects are each instances of the class stock.  Each
  of the objects has its own instance variables for quantity,
  price and supplier.
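
  Continuing the illustrative Python sketch, instantiation and
  the per-object instance variables look like this:

      # Instantiating the class three times gives three
      # objects, each with its own instance variables.
      nut  = Stock("nut",  quantity=500, price=0.05, supplier="Acme")
      bolt = Stock("bolt", quantity=200, price=0.10, supplier="Acme")
      nail = Stock("nail", quantity=900, price=0.02, supplier="Acme")

      nut.ChangeQuantity(-50)
      print(nut.quantity)     # 450 - only nut's data changed
      print(bolt.quantity)    # 200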
  
  Code that exists to carry out instantiation is known as a
  constructor.  Exactly what's allowed/needed in
  a constructor depends on the OO system, but the least you can
  expect is the ability to initialise instance variables.
  
  Some OO systems allow you to define a destructor - very useful
  for things like closing files and releasing resources.  You
  may be constrained as to what you can do in a destructor,
  particularly if it's invoked from some internal part of the
  language run-time (e.g. a garbage collector).
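
  In Python, for instance, the constructor is spelled __init__
  and the (optional) destructor __del__.  The class below is a
  hypothetical sketch of the file-closing case:

      class LogFile:
          def __init__(self, name):
              # Constructor: initialise instance variables and
              # acquire the resource.
              self.handle = open(name, "a")

          def __del__(self):
              # Destructor: release the resource.  It may be
              # run from the garbage collector, so keep it
              # deliberately simple.
              self.handle.close()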
  
  The names of instance variables and methods are collectively
  known as the messages understood by an object.  People often
  talk about "sending a message to an object" (applying the
  message to the object).  The effect will be to get or set the
  value of an instance variable, or invoke a method.
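
  How literally a language takes "messages" varies.  In the
  Python sketch, a message can even be named at run-time:

      # "Sending the message" ChangePrice to the nut object:
      nut.ChangePrice(0.06)

      # The same message, with its name held as data - handy
      # in data-driven code:
      message = "ChangePrice"
      getattr(nut, message)(0.07)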
  
  OO encourages the use of the same method name in each class
  for what is conceptually the same operation.  This is called
  polymorphism, and is closely related to the function
  overloading found in some languages.  In the above example,
  instead of ChangePrice, ChangeDescription and ChangeQuantity,
  you might have decided that Price, Description and Quantity
  should be objects, and have given each of them a Change
  method.  Of course, if Price, Description and Quantity are all
  easily represented using the standard data types of your
  language (numbers, strings, etc.), this may well be overkill.
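
  That alternative design might look like this - still
  illustrative Python, with just Price and Quantity shown:

      class Price:
          def __init__(self, amount):
              self.amount = amount

          def Change(self, new_amount):
              self.amount = new_amount

      class Quantity:
          def __init__(self, count):
              self.count = count

          def Change(self, new_count):
              self.count = new_count

      # Generic code: the same message suits either class.
      for thing in (Price(0.05), Quantity(500)):
          thing.Change(1)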
  
  If it helps, most languages have operator overloading, e.g.
  "+" often means "add two numbers" or "concatenate two
  strings", depending on the operands.  This makes "+" a
  polymorphic operator.  Polymorphic methods are an extension of
  the same idea.
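
  Python, for example, behaves exactly this way with its
  built-in types, and a class of your own (Money here is
  hypothetical) can join in by defining __add__:

      print(1 + 2)            # add two numbers      -> 3
      print("con" + "cat")    # concatenate strings  -> concat

      class Money:
          def __init__(self, pence):
              self.pence = pence

          # Defining __add__ makes "+" meaningful for Money.
          def __add__(self, other):
              return Money(self.pence + other.pence)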
  
  To make polymorphism possible, something must know the class
  of an object, so that the appropriate method can be used.
  This "something" is the language compiler, or the run-time, or
  both.  If the compiler resolves the method uniquely it's
  called early (or static) binding; if it's done at run-time
  it's called late binding.  You shouldn't be surprised that
  early binding leads to faster programs.  However, late binding
  has its advantages, too - e.g. in data-driven applications you
  might like to be able to create and use new classes at run-time.
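
  Python, say, binds late throughout, and a data-driven
  application can even build a class at run-time with the
  built-in type() - Washer here is invented, derived from the
  earlier Stock sketch:

      # Build a class at run-time from data: its name, its
      # parent classes, and a dictionary of attributes/methods.
      Washer = type("Washer", (Stock,), {"IsFastener": True})

      w = Washer("washer", 300, 0.01, "Acme")
      print(w.IsFastener)     # True - looked up at run-time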
  
  Polymorphism isn't always given the recognition it deserves,
  but used properly it can dramatically reduce the amount of
  code you have to write.  This
  is because it allows generic (general purpose) code to be
  written.  When that code needs a particular piece of
  functionality, instead of using a lot of IF's or CASE's the
  code can use a polymorphic method name.  As mentioned,
  something has to resolve the method, but if you can cope with
  the overhead involved the saving in code makes it worthwhile.
  Furthermore, the typical sequence of IF's or CASE's needs
  maintenance as the application is changed, and it's very easy
  to fail to change one or more of the IF's or CASE's scattered
  throughout the application.  With polymorphism no such
  sequence of IF's/CASE's exists, so no change is required.
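
  The contrast, sketched in Python (Nut, Bolt and ReorderCost
  are all invented for the purpose):

      # Without polymorphism: a chain that must name every
      # class, and must be edited whenever a class is added.
      def ReorderCost(item):
          if isinstance(item, Nut):
              return item.quantity * 0.05
          elif isinstance(item, Bolt):
              return item.quantity * 0.10
          # ... and so on, repeated all over the application.

      # With polymorphism: each class defines ReorderCost(),
      # and this generic code never changes.
      def TotalReorderCost(items):
          return sum(item.ReorderCost() for item in items)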
  
  As with other programming, it's useful to distinguish between
  the interface provided by some code (here, a class), and its
  internal implementation.  Ideally, the interface should be
  simple, consistent, clear, and so on, and you should be able
  to use the class without caring about the details of its
  implementation.
  
  This has a number of advantages.  You can change the
  implementation without having to change code using the class.
  With more than one developer you may be able to specify the
  interface and then develop in parallel.  This separation of
  interface and implementation details is known as
  encapsulation, or information hiding.  The point of the class
  interface is to specify what the class does, not how it does it.
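
  A small Python illustration - PriceList is hypothetical, and
  note that Python hides implementation detail by convention
  (the leading underscore) rather than by enforcement:

      class PriceList:
          def __init__(self):
              self._table = {}    # implementation detail

          # The interface says what the class does, not how:
          def SetPrice(self, code, price):
              self._table[code] = price

          def PriceOf(self, code):
              return self._table[code]

      # Callers use only SetPrice()/PriceOf(), so _table could
      # become, say, a file-based lookup without changing them.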
  
  Later sections deal with choosing classes, i.e. Object
  Oriented Analysis/Design (OOA/D), but for now the important
  thing to concentrate on is that a class should be an
  uncluttered, coherent, self-contained representation of
  something meaningful to you and your application.
  
  Sometimes a class is clearly very similar to another, but with
  some differences.  With experience you learn whether to make
  just one class, with some internal data (instance variables)
  to influence its behaviour, or to make more than one class.
  OOA/OOD can help you decide, too.  In many cases it makes
  sense to have more than one class, and OO provides inheritance
  so that you can put the fundamental data and methods in one
  class, then inherit and add the new data/methods.  The
  original class is called the super or parent class, and the
  new one is a sub-class, derived class, or child class.  The
  terms ancestor class and descendant class can be used to refer
  to classes which may be more than one level away in the class
  hierarchy.  A class with no parent is a base class.
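
  In the running Python sketch, inheriting from the invented
  Stock class and adding just the differences looks like this:

      class PerishableStock(Stock):   # Stock is the super class
          def __init__(self, description, quantity, price,
                       supplier, use_by):
              # Reuse the parent's constructor rather than
              # copying it.
              super().__init__(description, quantity, price, supplier)
              self.use_by = use_by    # the added instance variable

          def IsExpired(self, today): # the added method
              return today > self.use_by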
  
  The nice thing about inheritance is that you don't copy the
  original class using an editor - in effect you just say
  "inherit this new one from that existing one".  This almost
  always reduces code maintenance, as any changes are confined
  to one place.  It can also lead to code which is more easily
  understood, as there's less to look at.  However, if you use
  lots of inheritance, e.g. B inherits A, C inherits B, D
  inherits C, etc., figuring out what's in D can be quite
  painful.  A good rule of thumb is to avoid too many levels of
  inheritance: perhaps a practical limit is 5 to 7, but you have
  to use your judgement.
  
  Using inheritance to make a new class, then adding new
  properties (data) and/or methods, is sometimes called
  programming by differences, or controlled complication.
  
  The simpler, more generic class is the one you inherit from.
  Consequently, it's the sub-class that is more specific.  To
  put this another way, the classes near the top have the least
  data and behaviour defined, and are consequently open-ended.
  Your sub-classes add more data and behaviour, becoming more
  concrete in the process.
  
  Sometimes you create a class that is intentionally inherited
  but never instantiated.  In this case, the class is an
  abstract class.  Some OO systems allow you to specify that a
  class is an abstract class to assist in documentation and to
  prevent anyone instantiating the class in error.
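
  Python is one such system: its abc module documents the
  intent and stops accidental instantiation (StockItem is
  invented here):

      from abc import ABC, abstractmethod

      class StockItem(ABC):           # an abstract class
          @abstractmethod
          def ReorderCost(self):
              ...

      # StockItem() now raises TypeError: the class exists
      # only to be inherited from, never instantiated.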
  
  Ideally, neither sub-class nor super class will mention the
  other by name (except that the sub-class needs to name the
  class it's inheriting).  You don't want to have a class naming
  other classes for two reasons.  Firstly, you may want to add
  more classes above, below or in the middle of the existing
  ones, and you'd then have a maintenance problem with each of
  the names already used.  Secondly, having to name other
  classes tends to mean something's wrong: inheritance is
  supposed to be clean, even obvious, so you shouldn't need to
  be specifying class names to control what's to happen or
  how/when it's to happen.
  
  However, there's a more complex, potentially risky, form of
  inheritance - which is not supported by all OO languages.
  This is multiple inheritance, where a sub-class inherits more
  than one super class.  You can argue that this is perfectly
  reasonable, and you can argue that its problems aren't worth
  risking.  These problems all stem from the possibility that an
  instance variable and/or method in one super class also exists
  in another super class.  Some OO systems provide access to
  both versions of the inherited instance variable/method, in
  which case you'll have to specify which one you want in any
  particular piece of code.  Other OO systems only keep one of
  the definitions, which may or may not be what you want.  You
  might even prefer the language to remove all such conflicts,
  forcing you to define exactly what you want.  However, read
  on...
  
  Multiple inheritance brings with it the possibility of
  repeated inheritance, where a class inherits an entire class
  more than once.  For example, if B inherits A, C also inherits
  A, and D inherits B and C, you've got D inheriting A twice.
  In this case, removing all conflicts is unlikely to be useful.
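
  Python allows both forms, and takes the keep-one-definition
  approach using a fixed "method resolution order".  The
  classic diamond looks like this:

      class A:
          def Who(self): return "A"

      class B(A):
          def Who(self): return "B"

      class C(A):
          def Who(self): return "C"

      class D(B, C):                  # D inherits A twice
          pass

      print(D().Who())    # B - Python keeps one definition,
                          # chosen by the resolution order
      print(D.__mro__)    # D, B, C, A, object: A appears once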
  
  The combination of polymorphism and inheritance is what leads
  to the OO claim of code reuse.  However, if you rush to make
  lots of classes you're unlikely to get any such thing!  The
  most reusable classes are generally those which are easily
  understood and whose interface isn't going to change.
  Discovering such classes, specifying and documenting them,
  implementing and testing them, and putting them into a class
  library is no small task.
  
  Incidentally, you can use OO techniques - especially OOA/OOD -
  in just about any language, but you need to simulate
  inheritance, polymorphism, etc. if the language doesn't
  already supply OO facilities.  This can be hard work, tends to
  result in code which is harder to write and understand, and is
  certainly not recommended for OO beginners.
  
  You can also use an OO language, but produce code which is
  procedural - despite your best intentions.  This is especially
  easy for experienced developers faced with tight timescales.
  Managers/customers need to realise this, as do developers.
  
  The most reliable way to get used to using OO seems to be to
  start with a simple application and an undemanding timescale.
  If you can get an experienced OO user to guide you, so much
  the better.  Books are invaluable, if you're the kind of
  person prepared to work at them (a book list appears later).
  User groups sometimes organise OO sessions, too.
  
  The kinds of applications that benefit from OO most are
  usually those where the structure of the data doesn't change
  as much as the ways you want to process the data.  You can
  argue that many business applications are like this.
  
  Perhaps you're wondering about the connection between OO and
  windowing systems?  No such connection has to exist.  Objects
  provide a convenient place to put data about windows, menus,
  etc., but they're not necessary.  Indeed, many successful
  GUI applications have been written without using objects.
  That's not to say you should ignore OO - just don't believe
  the hype.
  
  Although it's vital to get involved with OOA/OOD, let's first
  look at enough TopClass / VO syntax to be able to cope with
  the later examples.


  
