Saturday, October 20, 2012

Closures in Java

Have you looked at the proposal for Function Types and Closures (Anonymous Function Types) in Java?
You can read the proposal at http://blogs.sun.com/roller/resources/ahe/closures.pdf and discuss it at http://blogs.sun.com/ahe/entry/full_disclosure. You may also check the example using closures given by Neal Gafter on his blog at http://gafter.blogspot.com/2006/08/whats-point-of-closures.html.

This proposal adds function types to Java. Using these function types, we will be able to create function objects inside our Java code and pass them to methods. If a method accepts an object of an interface type that has only a single method (a single-method interface, e.g. ActionListener or Comparator), then we may pass a function object with the same signature (and a return type that is the same as, or a subtype of, the return type of the interface method), and Java will automatically convert the function object to an object of that interface.
The example given in proposal is:



public static void main(String[] args) {
    // declare a local function
    int plus2(int x) { return x + 2; }

    // create a variable of function type int(int) and assign the function to it
    int(int) plus2b = plus2;

    // invoke the function object
    System.out.println(plus2b(2));
}
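For comparison, here is how the single-method-interface case described above looks today, using an anonymous inner class; under the proposal, a function object with a matching signature (for example an int(String, String) function) could be passed to Arrays.sort directly instead. This is only an illustrative sketch; the class and variable names are my own.

import java.util.Arrays;
import java.util.Comparator;

public class ComparatorExample {
    public static void main(String[] args) {
        String[] names = { "banana", "fig", "apple" };

        // Today a single-method interface such as Comparator must be implemented by a
        // (possibly anonymous) class; the closures proposal would let a function object
        // with the same signature be converted to the interface automatically.
        Arrays.sort(names, new Comparator<String>() {
            public int compare(String a, String b) {
                return a.length() - b.length();
            }
        });

        System.out.println(Arrays.toString(names)); // [fig, apple, banana]
    }
}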



In my opinion, closures will help make Java code cleaner, and function types will add a container of pure logic to the Java language. A class contains data as well as logic, while a function contains only logic.
Using function types and closures, the OperationHandler pattern example from my previously posted blog can be rewritten without the need to create an interface and inner classes, as follows:
static void performFileOperation(String filePrefix, File inputFile,
        void(String filePrefix, File inputFile) throws IOException performOperation) throws IOException {
    String fileName = inputFile.getName();
    if (inputFile.isDirectory()) {
        fileName = fileName + "/";
    }
    if (inputFile.isDirectory()) {
        File[] children = inputFile.listFiles();
        int len = children == null ? 0 : children.length;
        for (int i = 0; i < len; i++) {
            performFileOperation(filePrefix + fileName, children[i], performOperation);
        }
    }
    else {
        // invoke the function object that was passed in
        performOperation(filePrefix, inputFile);
    }
}

public static void createZipFile(File source, File dest, int compressionLevel) throws IOException {
    ZipOutputStream out = new ZipOutputStream(new FileOutputStream(dest));
    out.setLevel(compressionLevel);
    // Creating a closure here is not better than creating an anonymous inner class
    void(String, File) throws IOException closure = (String filePrefix, File inputFile) {
        ZipEntry entry = new ZipEntry(filePrefix + inputFile.getName());
        entry.setTime(inputFile.lastModified());
        out.putNextEntry(entry);
        InputStream in = new FileInputStream(inputFile);
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, bytesRead);
        }
        in.close();
        out.closeEntry();
    };
    performFileOperation("", source, closure);
    out.close();
}

public static void setLastModifiedDate(File source, java.util.Date lastModifiedDate) {
    try {
        /* Creating a closure here eliminates the need to create an anonymous inner class
           or a concrete class implementing the interface FileOperationHandler. */
        void(String, File) throws IOException closure = (String filePrefix, File inputFile) {
            inputFile.setLastModified(lastModifiedDate.getTime());
        };
        performFileOperation("", source, closure);
    }
    catch (IOException ex) {
        throw new RuntimeException(ex);
    }
}

Please let me know your opinions in the comments.

A Counter solution to Java 5 Enum in Java 1.4

Java 5 has a very good feature: enum. The enum keyword can be used to create C++-like enums. An enum created in Java 5 is a special kind of class whose only instances are the members of that enum; each member is an instance of the enum class. The superclass of all enum objects is java.lang.Enum, and enums cannot be extended. Java 5 also allows methods to be declared inside an enum.
For example:
public enum MeasurementType {
    Length, Mass, Time;
}

The following code works perfectly fine for the above enum and the output is shown below:
System.out.println(MeasurementType.Length instanceof MeasurementType);
System.out.println(MeasurementType.Length);
Output:
true
Length

Now, there are many situations where we need enums and the features stated above, but we cannot use Java 5 because of project requirements (for example, many projects must run on JDK 1.3/1.4 because the client environment does not support JDK 5). In such a case many developers feel stuck and devise their own workarounds. Sometimes we have written a library using Java 5 features like enums, but we need to use that library in a legacy project and port the whole code to Java 2. If we are using annotations, then we need to discard all the annotations and enums; most of the other Java 5 features can be ported to earlier JDKs fairly easily. But here comes the hardest part: discarding enums. Annotations are still the least understood and least used feature of Java 5, used mostly by framework authors in special kinds of libraries like ORMs, but enums are very popular, and it is not feasible to change every line of code that uses an enum class.
Here I present a solution that replaces an enum class with a Java 2 class, without requiring any changes to the code that uses the enum.
Look at the following code:

public abstract class MeasurementType {
    private MeasurementType() {
        // constructor kept private so that no subclasses can be created outside this class
    }

    public static final MeasurementType Length = new MeasurementType() {
        public String toString() {
            return "Length";
        }
    };

    public static final MeasurementType Mass = new MeasurementType() {
        public String toString() {
            return "Mass";
        }
    };

    public static final MeasurementType Time = new MeasurementType() {
        public String toString() {
            return "Time";
        }
    };
}

Now the code shown above that used the enum MeasurementType works perfectly fine with the class MeasurementType.
System.out.println(MeasurementType.Length instanceof MeasurementType);
System.out.println(MeasurementType.Length);
Output:
true
Length

If your enums had any methods inside them, they can be included in the abstract class created to replace the enum, as shown in the sketch below. The Java file containing the enum is replaced by the file containing the abstract class, and the package obviously stays the same, so all the classes using the enum compile without any changes.
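For instance, if the original enum declared a method, the replacement class can declare it as an abstract method and implement it in each constant. The getSymbol() method below is hypothetical and only used for illustration:

public abstract class MeasurementType {
    private MeasurementType() {
        // constructor kept private so that no subclasses can be created outside this class
    }

    // a method that the original Java 5 enum might have declared (hypothetical)
    public abstract String getSymbol();

    public static final MeasurementType Length = new MeasurementType() {
        public String toString() { return "Length"; }
        public String getSymbol() { return "m"; }
    };

    public static final MeasurementType Mass = new MeasurementType() {
        public String toString() { return "Mass"; }
        public String getSymbol() { return "kg"; }
    };

    public static final MeasurementType Time = new MeasurementType() {
        public String toString() { return "Time"; }
        public String getSymbol() { return "s"; }
    };
}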
This way you can replace all your Java 5 enums and compile all your libraries for Java 2.

The pattern shown above is a well-known way to create enum-like structures in Java (the typesafe enum pattern). The class is kept abstract and the constructor is kept private so that no subclasses can be created outside the enumeration class and no new members can be added beyond those originally included by the creator of the library.

Happy Porting....

Sandeep Beniwal

-------
Hi,
Thanks for reminding me that my enum class written for Java 2 is missing the values() and valueOf() methods of the Java 5 Enum. I was not using these methods in the little library that I needed to port to Java 2.
We can add these two methods to our MeasurementType example as follows:

private static final MeasurementType[] values = new MeasurementType[] { Length, Mass, Time };

public static MeasurementType[] values() {
    // return a copy so that callers cannot modify the internal array
    return (MeasurementType[]) values.clone();
}

public static MeasurementType valueOf(String value) {
    if ("Length".equals(value)) {
        return Length;
    }
    else if ("Mass".equals(value)) {
        return Mass;
    }
    else if ("Time".equals(value)) {
        return Time;
    }
    return null;
}
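A quick usage sketch (hypothetical calling code, placed in any class of the same project) showing that the replacement methods behave like their Java 5 counterparts:

public static void main(String[] args) {
    MeasurementType[] all = MeasurementType.values();       // Length, Mass, Time
    MeasurementType mass = MeasurementType.valueOf("Mass"); // the Mass constant
    System.out.println(all.length + " " + mass);            // prints "3 Mass"
}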

The OperationHandler Design Pattern


According to the principle of reuse, code must not be copy-pasted from one location to another; instead it should be reused by calling methods, composition, inheritance, and refactoring of existing code. To maximize reuse, we use design patterns so that the code we write is more reusable. Many design patterns are already discussed in various books, blogs, and forums. I want to discuss a pattern that I have never encountered before in any book or site; I have named it The OperationHandler Design Pattern. Before introducing the pattern, I show a problem and its solution, then the solution reused by copying and pasting code from the existing solution, and finally both solutions with maximum reuse using the pattern being introduced.
Problem: We have to zip all files inside a directory recursively. We have to create a function that accepts a File object to zip and a File object to write the zip file to. If the input File is a file, we zip it directly; if it is a directory, we zip its contents recursively.

Solution: We create a function that accepts the two File objects described in the problem and calls a recursive function. The recursive function accepts a File object to zip and a ZipOutputStream to which the contents of the input file are written.

Code:
public static void createZipFile(File source, File dest, int compressionLevel) throws IOException {
    ZipOutputStream out = new ZipOutputStream(new FileOutputStream(dest));
    out.setLevel(compressionLevel);
    createZip("", source, out);
    out.close();
}

private static void createZip(String prefix, File source, ZipOutputStream out) throws IOException {
    String fileName = source.getName();
    if (source.isDirectory()) {
        fileName = fileName + "/";
    }
    if (source.isDirectory()) {
        File[] children = source.listFiles();
        int len = children == null ? 0 : children.length;
        for (int i = 0; i < len; i++) {
            createZip(prefix + fileName, children[i], out);
        }
    }
    else {
        ZipEntry entry = new ZipEntry(prefix + fileName);
        entry.setTime(source.lastModified());
        out.putNextEntry(entry);
        InputStream in = new FileInputStream(source);
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, bytesRead);
        }
        in.close();
        out.closeEntry();
    }
}
Now we get another problem: we have to set the last modified date of all files inside a directory recursively. Human nature is to look for an existing solution before inventing one, so we look around, copy the code from the previous example, and do the following.
Problem: We have to set the last modified date of all files inside a directory recursively. We have to create a function that accepts a File object and a date.
Solution: We create a function that accepts the two arguments and calls itself recursively. To create this function we copy the code from the previous example: if the input File represents a directory we call the function recursively for the files inside it, and if it represents a file we set its last modified date.

Code:
public static void setLastModifiedDate(File source, java.util.Date lastModifiedDate) {
    if (source.isDirectory()) {
        File[] children = source.listFiles();
        int len = children == null ? 0 : children.length;
        for (int i = 0; i < len; i++) {
            setLastModifiedDate(children[i], lastModifiedDate);
        }
    }
    else {
        source.setLastModified(lastModifiedDate.getTime());
    }
}
Although the code in this example does not look copy-pasted, let us accept it as such in order to understand the pattern I am going to present.
Let us compare the code in both the examples:

// private static void createZip(String prefix, File source, ZipOutputStream out) throws IOException {
public static void setLastModifiedDate(File source, java.util.Date lastModifiedDate) {
    /* String fileName = source.getName();
    if (source.isDirectory()) {
        fileName = fileName + "/";
    } */
    if (source.isDirectory()) {
        File[] children = source.listFiles();
        int len = children == null ? 0 : children.length;
        for (int i = 0; i < len; i++) {
            /* createZip(prefix + fileName, children[i], out); */
            setLastModifiedDate(children[i], lastModifiedDate);
        }
    }
    else {
        /* ZipEntry entry = new ZipEntry(prefix + fileName);
        entry.setTime(source.lastModified());
        out.putNextEntry(entry);
        InputStream in = new FileInputStream(source);
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, bytesRead);
        }
        in.close();
        out.closeEntry(); */
        source.setLastModified(lastModifiedDate.getTime());
    }
}
Now let me align both problems:
We have to perform an operation on all files inside a directory recursively, where (i) the operation is to add the file contents to a ZipOutputStream, or (ii) the operation is to set the last modified date of the files.
In the operation handler pattern, we create an operation handler interface for that family of operations and a recursive method that accepts both the input on which to carry out the operation and the operation handler object. We create an implementation of the operation handler interface for each operation, and pass the information needed to execute the operation to the handler's constructor. The object on which the operation is to be performed is passed to the handler's method.
The previous examples can be rewritten using this pattern as:
static interface FileOperationHandler {
    void performOperation(String filePrefix, File inputFile) throws IOException;
}

static void performFileOperation(String filePrefix, File inputFile, FileOperationHandler opHandler) throws IOException {
    String fileName = inputFile.getName();
    if (inputFile.isDirectory()) {
        fileName = fileName + "/";
    }
    if (inputFile.isDirectory()) {
        File[] children = inputFile.listFiles();
        int len = children == null ? 0 : children.length;
        for (int i = 0; i < len; i++) {
            performFileOperation(filePrefix + fileName, children[i], opHandler);
        }
    }
    else {
        opHandler.performOperation(filePrefix, inputFile);
    }
}
static class LastModifiedOperationHandler implements FileOperationHandler {
    private final java.util.Date lastModifiedDate;

    LastModifiedOperationHandler(java.util.Date lastModifiedDate) {
        this.lastModifiedDate = lastModifiedDate;
    }

    public void performOperation(String filePrefix, File inputFile) throws IOException {
        inputFile.setLastModified(lastModifiedDate.getTime());
    }
}
static class ZipOperationHandler implements FileOperationHandler {
    private final ZipOutputStream out;

    ZipOperationHandler(ZipOutputStream out) {
        this.out = out;
    }

    public void performOperation(String filePrefix, File inputFile) throws IOException {
        ZipEntry entry = new ZipEntry(filePrefix + inputFile.getName());
        entry.setTime(inputFile.lastModified());
        out.putNextEntry(entry);
        InputStream in = new FileInputStream(inputFile);
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = in.read(buffer)) > 0) {
            out.write(buffer, 0, bytesRead);
        }
        in.close();
        out.closeEntry();
    }
}
public static void createZipFile(File source, File dest, int compressionLevel) throws IOException {
    ZipOutputStream out = new ZipOutputStream(new FileOutputStream(dest));
    out.setLevel(compressionLevel);
    performFileOperation("", source, new ZipOperationHandler(out));
    out.close();
}
public static void setLastModifiedDate(File source, java.util.Date lastModifiedDate) {
    try {
        performFileOperation("", source, new LastModifiedOperationHandler(lastModifiedDate));
    }
    catch (IOException ex) {
        throw new RuntimeException(ex);
    }
}
The operation handler pattern is usable whenever we need to perform multiple operations recursively on a hierarchy of the same kind of objects. We may need to change the way we traverse the hierarchy for the sake of optimization or a change in implementation logic; if the traversal code has been copy-pasted everywhere, then it has to be changed everywhere. Copy-pasting causes all the problems that come with redundancy: whenever we copy-paste code, we also copy-paste its bugs, and it becomes difficult to track down and change every place where the copied code ended up.
This pattern is usable in a variety of places: to perform operations on, or collect data from, a tree-like structure; to perform attach/detach operations on persistent objects in an ORM implementation; or to set properties of a tree recursively (for example, to enable double buffering on a component hierarchy).
The Operation Handler pattern can be further extended with callbacks in the operation handler interface, such as operationStarted(…) and operationCompleted(…), and a progress monitor can be passed into the recursively called function and kept informed from inside it, as sketched below.
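A possible shape of such an extended handler is sketched here; the callback names and the ExtendedFileOperationHandler type are illustrative assumptions, not part of the pattern as shown above:

import java.io.File;
import java.io.IOException;

interface ExtendedFileOperationHandler {
    // called once before the traversal of the hierarchy starts (illustrative callback)
    void operationStarted(File root);

    // the operation itself, as in the FileOperationHandler interface above
    void performOperation(String filePrefix, File inputFile) throws IOException;

    // called once after the whole hierarchy has been processed (illustrative callback)
    void operationCompleted(File root);
}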
The Operation Handler pattern is very useful where there are many functions providing similar functionality, with small differences, that share a lot of code. It can be used in such places to avoid copy-paste, improve code quality, and encourage reuse.
The cons of The Operation Handler Pattern:
1. Not all the operations share the same set of exceptions. In the given example, the function that zips the files throws IOException, but the function that sets the last modified date need not throw it; handling IOException inside LastModifiedOperationHandler is therefore an unnecessary overhead of this pattern.
2. There may be a need to keep extra parameters in the operation handler interface's methods that are not used by all operations. For example, the file prefix passed to performOperation(…) of FileOperationHandler is not used by LastModifiedOperationHandler.
Considering these overheads, we should use this pattern only where a large amount of code is being copied and pasted and is prone to change (and therefore must be shared instead of duplicated), or where the set of exceptions and parameters required by the operation handler's method is nearly identical for most of the operations performed using the pattern.
Let us explore another example where The Operation Handler pattern is used to generalize a number of tasks which would share a lot of copy-pasted code otherwise.
Problem:
1. An Order object is reviewed by an official and is then approved or cancelled. We have to mark the Order as approved or cancelled on the click of a button.
2. When we mark an order as approved or cancelled, we have to mark all of its orderLineItems the same way, as well as all related Order objects (found according to some relation).
Similarly, when the Order is audited, we have to mark the Order and its orderLineItems as audited.
3. Whenever we mark an order as approved, cancelled or audited, we have to provide information like remarks, date, and userName in order to set fields like approvedBy, approvedOn, cancelledOn, cancelledBy, etc.
Solution:
Instead of copy-pasting the code to iterate Order and OrderLineItems we create an OperationHandler and a recursive function (or a set of functions) to perform the operation. Now we can create multiple implementations of the OperationHandler for marking the Order/OrderLineItem as approved / cancelled / audited to reuse the iterating code.
Now consider the case where we have to mark not only Order objects but also Tender and TenderLineItem, Contract and ContractLineItem, and Invoice and InvoiceLineItem objects in the same way (approved/cancelled/audited). We can reuse the code written for Order by extracting Document and DocumentLineItem interfaces and restating the Order problem in terms of Document and DocumentLineItem. Order, Invoice, Tender and Contract can then all implement the Document interface, and OrderLineItem, InvoiceLineItem, TenderLineItem and ContractLineItem can all implement the DocumentLineItem interface, as sketched below.
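A rough sketch of that generalization is given below. All the names (Document, DocumentLineItem, DocumentOperationHandler, performDocumentOperation, ApproveOperationHandler) are illustrative assumptions, not code from an existing library:

import java.util.Iterator;
import java.util.List;

interface DocumentLineItem {
    void setStatus(String status);   // e.g. "APPROVED", "CANCELLED", "AUDITED"
}

interface Document {
    List getLineItems();             // elements are DocumentLineItem objects
    void setStatus(String status);
}

interface DocumentOperationHandler {
    void performOperation(Document document);
    void performOperation(DocumentLineItem lineItem);
}

class DocumentOperations {
    // the shared traversal: marks the document and then each of its line items
    static void performDocumentOperation(Document document, DocumentOperationHandler handler) {
        handler.performOperation(document);
        for (Iterator it = document.getLineItems().iterator(); it.hasNext();) {
            handler.performOperation((DocumentLineItem) it.next());
        }
    }
}

// one concrete handler per operation, for example approval (illustrative)
class ApproveOperationHandler implements DocumentOperationHandler {
    private final String approvedBy;

    ApproveOperationHandler(String approvedBy) {
        this.approvedBy = approvedBy;
    }

    public void performOperation(Document document) {
        document.setStatus("APPROVED by " + approvedBy);
    }

    public void performOperation(DocumentLineItem lineItem) {
        lineItem.setStatus("APPROVED by " + approvedBy);
    }
}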
Thus we see that we can reuse a large amount of code by using The Operation Handler Design Pattern in our daily routine problems. Our objective must be to completely prohibit the copy-pasting of the code in our organization and to encourage as much code reuse as possible.

Modularization


Once there was a time when software development was considered a solitary effort: a programmer could sit down with a design and go on writing line after line of code. But modern software development is no longer a one-man show. It is a collaborative effort, involving people with different skill sets who combine their expertise to produce a working product. Modern systems are large and involve various tasks as part of the development process. Different teams handle different aspects of a problem and develop subsystems which are later integrated to produce a complete system. To use all teams and people at their maximum productivity and to maximize the reuse of code, a key principle of software engineering is to design in small, self-contained units, called components or modules. A system created this way is called modular.
Modularization is the process of breaking a task into subtasks: breaking down a problem, task, project, or system into increasingly modular parts (which can themselves be broken down further). The parts are divided on the basis of functionality, logic, independence, or their usability in other modules. Modularization also increases the reusability of code and components; different modules can often be aggregated, with small or no changes, to create new products.
The goal of modularization is to have each component meet following conditions:
  • Purposeful: A component fulfills a particular objective for which the component is created. We should avoid creating multipurpose components and break them into smaller single purpose components.
  • Small: The size of a component should be kept small and it should consist of an amount of information that a human can easily understand and maintain.
  • Simple: A component should be kept as simple as possible. The motive of modularization is to reduce complexity, since most bugs arise from the complexity of the problem being reflected in the code.
  • Independent: A component performs its tasks in isolation from other components. A component's objective should be independent of the objectives fulfilled by other components. A component may use other components' public APIs, but its objective must not be merely to complement another module; if it is, it should be created as a subcomponent of the principal component whose objective it complements.
A component (module) can be dependent on other components (modules) but the dependency should be well defined and a contract should be defined between the two components (modules) in terms of interfaces and methods etc. which should not get changed. If a component changes its public API, then all the components dependent on it will be broken.
In Java, a component should be designed in the form of interfaces, and the implementation of methods common to all implementations of those interfaces should be placed in abstract classes. One such example is the Collections framework of Java, which is designed as a set of interfaces and abstract classes. The interfaces, and the methods specified inside them, are the contract to which the component will abide. Other components may use the implementations of the interfaces without even knowing the actual classes that implement them, as in the small example below. This kind of design is also known as design by contract (DbC).
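As a small illustration of programming against such a contract, code that uses the Collections framework can depend only on the List interface; the concrete implementation can be swapped without changing the caller (the class and method names below are my own):

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ContractExample {
    // the caller is written against the List contract, not a concrete class
    static void printAll(List items) {
        for (int i = 0; i < items.size(); i++) {
            System.out.println(items.get(i));
        }
    }

    public static void main(String[] args) {
        List names = new ArrayList();      // could be swapped for new LinkedList()
        names.add("Length");
        names.add("Mass");
        printAll(names);
        printAll(new LinkedList(names));   // the same caller works with another implementation
    }
}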
A component (module) is managed as a single unit. In a programming language, a component is managed as a single compilation and deployment unit; in Java, a component is typically released as its own jar file. Object-oriented models and UML diagrams are designed and managed at the component level. Build tools like Ant and Maven (and IDEs like JBuilder and Eclipse) allow you to work with separate components as separate compilation units.
Another requirement for a component is independent testability. A component should be testable on its own (provided that the components on which it depends are already tested and released) and should not depend on components on which it is not directly dependent.
If a component is isolated from the effects of other components, then it is easier to trace a problem to the fault that caused it and to limit the damage the fault causes. It is also easier to maintain the system, since changes to an isolated component do not affect other components. And it is easier to see where vulnerabilities may lie if the component is isolated. We call this isolation encapsulation.
Information hiding is another characteristic of modular software. When information is hidden, each component hides its precise implementation or some other design decision from the others. Thus, when a change is needed, the overall design can remain intact while only the necessary changes are made to particular components.
I read somewhere that Encapsulation is the "technique for packaging the information [inside a component] in such a way as to hide what should be hidden and make visible what is intended to be visible."
In good software, design and program units should be only as large as needed to perform their required functions. There are several advantages to having small, independent components.
  • Maintenance: If a component implements a single functionality, it can easily be replaced with a revised one if necessary. The new component may be needed because of a change in requirements, hardware, or environment; sometimes the replacement is an enhancement, using a smaller, faster, more correct, or otherwise better implementation. The interfaces between this component and the remainder of the design or code are few and well described, so the effects of the replacement are evident.
  • Understandability: A system composed of many small components is usually easier to comprehend than one large, unstructured block of code.
  • Reuse: Components developed for one purpose can often be reused in other systems. Reuse of correct, existing design and code can significantly reduce the difficulty of implementing and testing new systems.
  • Correctness: A failure can be quickly traced to its cause if each component performs only one task.
  • Testing: A single component with well-defined inputs, output, and function can be tested exhaustively by itself, without concern for its effects on other modules (other than the expected function and output, of course).
  • Saleability: A component, or several components in aggregation (not the whole product), can be used to create a new product or to solve a different problem, and thus may be sold individually, increasing our customer base. Creating or extracting independently saleable components out of a system can therefore increase revenue and prevent rework on new software.
A modular component usually has high cohesion and low coupling. By cohesion, we mean that all the elements of a component have a logical and functional reason for being there; every aspect of the component is tied to the component's single purpose. A highly cohesive component has a high degree of focus on the purpose; a low degree of cohesion means that the component's contents are an unrelated jumble of actions, often put together because of time-dependencies or convenience.
Coupling refers to the degree with which a component depends on other components in the system. Thus, low or loose coupling is better than high or tight coupling, because the loosely coupled components are free from unwitting interference from other components.
(The above portion of this article is based on the article at http://www.phptr.com/articles/article.asp?p=31782&seqNum=5&rl=1)
Compile-time dependency on another module is the highest degree of coupling. If a component depends on another component, it must depend only on that component's public API (interfaces) at compile time. It must not care about the classes implementing those interfaces; the implementations should be made available at runtime by the runtime environment, through dependency injection or through the code written to integrate the system.
Coupling between components should be well thought out, and a component should use as little of another component as possible, so that the effect of changes in other components on our own work is minimized.
We should never copy-paste a piece of code or a method, because copied and pasted code means copied and pasted bugs, and the possibility that a bug does not get fixed everywhere the copy-paste process has propagated it.
Usually component design is done by senior developers who have a lot of experience and understand the costs and benefits of too little or too much modularization. But the implementation of the components is often written by novices, so poor code is produced which gets poorer as time elapses because of changes made by various people (it is a fact of the IT industry that code is seldom maintained by its creator). There must be some general practices to follow while working inside a well-defined module so that the code produced is understandable, simple, maintainable, and maximally reused.
In general, we can follow these guidelines while coding to keep our code clean and maintainable:
  1. If a class has a lot of public methods and may be extended by another class, then an abstract superclass and/or an interface should be extracted from it.
  2. If a class has static methods that are also called from outside the class, and the purpose of the class is not merely to hold those static methods, then a new utility-type class should be created as high in the component hierarchy as possible, and all the static methods should be moved to it. The constructor of the utility class may be kept private, and other classes may call the static methods through the utility class name or by importing them statically (static import is available in Java 5.0). This is also known as the static class pattern.
  3. If a class has methods that perform some general work but also contain code to locate resources (for example, a method that performs a database query also creates the database connection, or a method that publishes to a JMS topic also looks up the destination), then the required resources should be accepted as method parameters and the lookup code should be moved to the places from which those methods are called. This makes the methods more general, and in most cases you will find that such methods, which were instance methods before the refactoring, can now be made static (see the sketch after this list). If the number of such methods grows inside a component, they can be moved to a utility-type class; a utility class may also be split into several classes according to the classification of the utility methods it contains.
  4. Instead of copying and pasting code from one class to another, you will find that the code can be reused by refactoring the existing classes and using inheritance, composition, or utility classes. Never copy-paste code by deferring such refactoring because of the pressure and priority of the work at hand or simply because you are in haste. As the complexity and size of classes grow, refactoring becomes more difficult and sometimes impossible (impossible in the sense that creating the component from scratch seems more economical than refactoring the existing one).
  5. When you are coding, do not use tiny variable names like a, b, c, x, y, in, out, or index; use names from the problem context instead, e.g. chargeAmount, chargedQuantity, orderIndex.
  6. Use code inspection tools like PMD (http://pmd.sourceforge.net), FindBugs (http://findbugs.sourceforge.net), and Lint. The errors and warnings reported by these tools, and the best coding practices described against each error/warning in their documentation, will give you a better idea of good and bad coding.
  7. Do not be afraid of refactoring. Use your IDE's built-in refactoring facilities as much as possible. If you have to refactor a large piece of code, you may use tools like RefactorIt (available as freeware/shareware/trialware or commercial).
  8. Whenever you find that a process with a definite input, a definite output, and a definite procedure to produce the result is embedded inside a piece of code intended for something else (and used as a part of it), it should be extracted into a separate method, class, or component as applicable.
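A minimal sketch of the refactoring described in point 3; the class, table, and method names here (OrderQueries, ConnectionFactory, countOrders) are hypothetical examples, not code from any real project:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OrderQueries {

    /* Before: the method both looks up the connection and runs the query.
    public int countOrders(String customerId) throws SQLException {
        Connection con = ConnectionFactory.getConnection(); // resource lookup buried in the method
        ...
    }
    */

    // After: the resource is accepted as a parameter, so the method is more general
    // (and can now be static); the caller decides where the connection comes from.
    public static int countOrders(Connection con, String customerId) throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "select count(*) from orders where customer_id = ?");
        try {
            ps.setString(1, customerId);
            ResultSet rs = ps.executeQuery();
            rs.next();
            return rs.getInt(1);
        } finally {
            ps.close();
        }
    }
}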
Using refactoring tools and code inspection tools regularly on your code will improve the quality of the code you produce and help you apply the rules of modularization to a greater extent.

Some comments from my old blogs:

Anonymous said...
You forgot to give credit to where much of this article was taken from.

Wednesday, August 27, 2008

Hello,
When games like the Olympics come around, we focus on how to win medals and beat athletes from other countries, and the media discusses the athletes' lives while they are in the limelight. But I have enclosed a small interview with an American gold medalist, and by reading it we can see how such athletes prepare for the games and the technology they use. I believe that in India players are simply practicing, but is that practice in the right direction? Do players really know what it takes to reach peak performance? (We have a single thumb rule for sports in India: eat well and practice hard. But it is time for smart practice too.) Think about how many professional runners use smart running shoes equipped with technology, how much players study the actual playing ground before the contest, and how much they study competitors' performances from previous years, for example on video and through analytics. These are not accusations; they are just points we need to look into.

You can understand the rest from the interview below:

Google Earth is getting a nice plug from Olympic Gold Medal cyclist Kristin Armstrong. When she did her time trials in December, 2007 in China, she took along her husband’s GPS unit to capture the elevation along the route. Then she used that data to find the best training route back home. In a guest post on the Google Lat-Long blog, she writes:

After returning home to Boise, Idaho, I exported the GPS data to several different formats, one of which I was able to launch with Google Earth. I was then able to trace the entire course from the comfort of my home half a world away and find a similar route to train on back in Boise. This capability along with having the elevation profile proved invaluable in my preparation for my Gold Medal race.