Search This Blog

Thursday, August 5, 2010

Atlassian Stack: FishEye (auto start)

Today's ticket is to rig up FishEye so that it starts at boot time.  I scoured the FishEye docs and googled but came up empty.  There is a Mac OS X solution and one for an older version of FishEye and CentOS.  Nothing for Karmic Koala.  My solution?  Emulate what was done for Jira.  I copied /etc/init/jira.conf and created /etc/init/fisheye.conf.  My resulting file looks like this:
# fisheye

description     "Atlassian FishEye - Source control viewer"

start on runlevel [2345]
stop on runlevel [!2345]

kill timeout 30

env RUN_AS_USER=fisheye
env BASEDIR=/opt/fisheye

script
    LOGFILE=$BASEDIR/fisheye.out.`date +%Y-%m-%d`.log
    exec su - $RUN_AS_USER -c "$BASEDIR/bin/start.sh" >> $LOGFILE 2>&1
end script


The solution does not appear to be perfect, however.  If I issue sudo service jira stop, I get this: jira stop/waiting.  If I issue sudo service fisheye stop, I get this: stop: Unknown instance.  Not good. I don't want to just kill the process when I shut down the box.  Who knows what kind of data corruption might occur?  I need to figure out what is wrong with my script and fix it.  One difference is that the other Atlassian pieces bundle their own copy of Tomcat, and that is what the start scripts invoke.  FishEye uses something else.

I googled about Upstart and decided that I won't ever fully understand how it works.  So, I've decided to make two Upstart scripts: one that controls starting FishEye at boot time (fisheye-start.conf) and one that controls shutting FishEye down during shutdown (fisheye-stop.conf).  Perhaps not the most elegant solution, but it appears to work.  All I'm doing is calling start.sh in one and stop.sh in the other.  Nothing fancy.
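For reference, here is roughly what the pair looks like.  These are hypothetical contents modeled on the jira.conf-style file shown above; the runlevel stanzas are my guess at a reasonable split:

```
# /etc/init/fisheye-start.conf (sketch)
description "Atlassian FishEye - start at boot"
start on runlevel [2345]
task
exec su - fisheye -c "/opt/fisheye/bin/start.sh"

# /etc/init/fisheye-stop.conf (sketch)
description "Atlassian FishEye - stop at shutdown"
start on runlevel [016]
task
exec su - fisheye -c "/opt/fisheye/bin/stop.sh"
```

Marking both jobs as task tells Upstart that they run to completion rather than supervise a long-lived daemon, which sidesteps the stop: Unknown instance problem.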

How Do I Use Spring Security To Secure Spring MVC and Spring BlazeDS In The Same Application?

Context: I have a single web application that is based on Spring 3.  It serves up RESTful resources using the new Spring MVC annotations and it works fine.  After some head scratching and research, I finally figured out the configuration required to lock down the Spring MVC calls with Digest Authentication via Spring Security 3.  I then wanted to provide access to the same set of services to our Flex clients using AMF, so I added Spring BlazeDS Integration to the mix.  Remoting a service via Spring is almost trivial: add an annotation or two and you are good to go.  Adding security to the mix is almost as easy.  If you follow the directions on how to lock down your AMF channels, it'll work fine.  The problem I ran into is that the set of security filters set up by the Spring BlazeDS Integration directions interferes with the set of filters needed by Spring MVC.  The solution?  Watch things in the debugger in a working environment, reverse engineer the required filters for Spring BlazeDS and then specify them by hand in your Spring Security setup.  Spring Security uses a chain of filters to apply authentication logic to servlets.  Spring MVC and Spring BlazeDS each get their own instance of DispatcherServlet in the application.  You then apply the required filters to the appropriate servlet.  My solution was to break up the Spring Security beans into three files:

  • common-security-context.xml - holds beans that are common to both Spring MVC and Flex authentication
  • mvc-security-context.xml - holds the beans specific to authentication of the RESTful API
  • flex-security-context.xml - holds the beans specific to authentication of the BlazeDS calls
What you should end up with is the ability to invoke the service in two ways: one as a RESTful resource and one as a Flex Remote Object.  In each case, providing the same set of credentials should authenticate you.  In summary, relying on the Spring Security namespace to set up your security environment does not work if you are combining Flex and Digest authentication: you need to set things up by hand.
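The three security context files were attached to the original post.  To give a flavor of the hand-wired approach, here is a minimal, hypothetical sketch; the bean ids, filter names, and URL patterns are illustrative, not the originals, and it assumes the Spring Security `sec` namespace is declared:

```xml
<!-- Sketch only: map a different filter chain to each DispatcherServlet. -->
<bean id="springSecurityFilterChain"
      class="org.springframework.security.web.FilterChainProxy">
  <sec:filter-chain-map path-type="ant">
    <!-- Digest Authentication chain for the Spring MVC (REST) servlet -->
    <sec:filter-chain pattern="/rest/**"
        filters="securityContextPersistenceFilter,
                 digestAuthenticationFilter,
                 exceptionTranslationFilter,
                 filterSecurityInterceptor"/>
    <!-- Chain for the BlazeDS servlet, per the filters observed in the debugger -->
    <sec:filter-chain pattern="/messagebroker/**"
        filters="securityContextPersistenceFilter,
                 exceptionTranslationFilter"/>
  </sec:filter-chain-map>
</bean>
```

Each named filter is a bean you define by hand (a DigestAuthenticationFilter wired to its entry point, and so on); the point is that each servlet sees only the filters it actually needs.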

Here are my versions of the files in question.  I hope that it saves you some time and effort.


Monday, August 2, 2010

Atlassian Stack: FishEye

Today's mission: install FishEye, the source control repository analysis tool. Quoting the online docs: "FishEye unlocks Subversion, Git, Perforce, Clearcase, CVS, and Mercurial with real-time notifications of code changes plus web-based reporting, visualisation, search and code sharing." I'll be installing 2.3.5 Standalone edition.

  1. create a new user account and group named fisheye
  2. add the fisheye account to the subversion group -- it needs read only access to the Subversion repository 
  3. extract the fisheye zip into /opt
  4. make a softlink from the new installation directory to /opt/fisheye
  5. change ownership of the installation directory over to fisheye:fisheye.
  6. mkdir /opt/fisheye-home
  7. change ownership over to the fisheye account
  8. export FISHEYE_INST in /etc/environment so that it points to /opt/fisheye-home
  9. copy /opt/fisheye/config.xml to /opt/fisheye-home
  10. logged out and then back into the account to make sure that the environment variable was set correctly
  11. as the fisheye account, start fisheye: sudo -u fisheye /opt/fisheye/bin/run.sh
  12. connect to http://localhost:8060/ and run through the wizard
  13. had to do something odd: in order to get the server ID, I had to start with the "obtain evaluation license" option and then back out and use the "enter existing license" option
  14. Added the Subversion repository by specifying svn://localhost/opt/svn
The initial installation is done.  Next step, integrate it with the rest of the stack.
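The filesystem side of the steps above can be sketched as a script.  This is a sandbox-friendly rehearsal: PREFIX keeps it out of /opt so it runs without sudo; on the real box drop PREFIX, run the commands as root, and finish with a chown -R fisheye:fisheye on both trees.

```shell
# Rehearsal of steps 3-9 under a scratch prefix.
PREFIX=${PREFIX:-/tmp/fisheye-demo}
mkdir -p "$PREFIX/opt/fisheye-2.3.5"              # stands in for unzipping the archive
ln -sfn "$PREFIX/opt/fisheye-2.3.5" "$PREFIX/opt/fisheye"   # softlink for easy upgrades
mkdir -p "$PREFIX/opt/fisheye-home"               # the FISHEYE_INST directory
touch "$PREFIX/opt/fisheye-home/config.xml"       # stands in for copying config.xml over
echo "FISHEYE_INST=$PREFIX/opt/fisheye-home"      # this line goes into /etc/environment
```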

Friday, July 30, 2010

Spring Security 3 and Digest Authentication

This was a tough one so I thought I'd take a few moments and share what I found.  Context: I'm using Spring 3 as the basis for a set of RESTful services.  The new MVC enhancements make supporting a RESTful API pretty simple.  What I wanted to do was to lock down the API a bit, and I turned to Spring Security 3 to make that happen.  Not wanting to enable HTTPS, at least not yet, I turned to Digest Authentication, which provides a reasonable authentication mechanism without the hassle of digital certificates.  Getting Spring Security enabled and, at least initially, authenticating with Basic Authentication was a breeze.  Once I tried to enable Digest Authentication, things got weird.

Luckily, I was able to locate two resources that steered me down the correct path.  One was the book Spring Security 3 and the other was an article on DZone titled "Using Spring Security to Enforce Authentication and Authorization on Spring Remoting Services Invoked From a Java SE Client".  The upshot is that you cannot rely on the simple namespace configuration that is almost always shown in the Spring Security 3 literature.  It assumes a JSP type of environment with form login screens et al.  In my case, I'm using just enough of the web stack to allow for HTTP-based manipulation of the server-side resources -- no GUI.  This means you have to configure things by hand so they suit your needs.  Here is the Spring context file I used to enable Digest Authentication for my RESTful services.

Since this was a prototype, I'm relying on the in-memory credential store, but you should be able to wire up something more suitable for a production environment.  What I wanted was to authenticate at the web layer but authorize at the services layer.  The authentication is handled by the Digest Authentication pieces and the authorization is handled by annotations on the service interfaces.  For example:
public interface EchoPort
{
    @PreAuthorize( "hasRole( 'ROLE_ADMIN' )" )
    EchoResponse echoMessage( final EchoRequest request ) throws EchoError;
}

I used Apache HttpClient 4.0.1 for testing and was pleasantly surprised to find that it handled both Basic and Digest authentication for me.  All I had to do was provide the credentials prior to making the call and HttpClient handled the rest.

theClient.getCredentialsProvider().setCredentials( AuthScope.ANY, new UsernamePasswordCredentials( defaultUserName, defaultPassword ) );

I hope this saves you a bit of time.

Thursday, July 29, 2010

Atlassian Stack: Confluence (integration)

Today's goal is to integrate Confluence with Jira and Crowd.  In the Jira admin doc I found a section on integrating Jira and Confluence.  I'll run through that.

  • configured quick search to point back to the Jira URL
  • configured trackback; I had to also go into Jira
  • use Confluence gadgets in Jira (I chose to use trusted applications instead of OAuth). I had to configure trusted applications on both the Jira and Confluence sides.  After reading the instructions, it became clear that adding gadgets would be a pain -- you have to add them individually.  I've decided to add gadgets only when/if I decide they might be useful.  Enabled the atlassian-gadgets-shared plugin in Confluence
  • enable Jira linker plugin by entering the download URL of the plugin into the Confluence plugin manager screen
  • after a bit of searching I found the Confluence section that describes how to integrate Confluence into Crowd. I did not build a new directory; I just reused the one I made for Jira.  I made the required Confluence groups and made one of the users a member of both groups.  Copied the jar files and edited the configuration files as instructed.  Restarted Confluence.  Logged in with a Crowd user and enabled external user management so that users must be managed via Crowd.
  • add the "useful extensions" described in the instructions.  Of the 3 suggested, I could only install the Jira Confluence portlet because the other two wanted me to authenticate for access to accounts I don't have.  Oh, well.
Not as clean an integration as I would've liked. Much of it appears to be plugin driven, and you have to install the plugins yourself.

Wednesday, July 28, 2010

Atlassian Stack: Confluence (auto-start)

I had to search on "confluence auto start" to find the page that describes how to auto-start Confluence under Linux.  Like the other products, the note from Daniel Harvey describing how to configure things for Karmic contains the steps you want to follow.  Make sure to make adjustments based on the link so that Crowd starts before Jira or Confluence.  For me, that meant a trip back into the crowd.conf file.  I rebooted the machine and verified that Confluence had started for me.  Easy as pie.
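A sketch of how that ordering can be expressed in Upstart -- hypothetical fragment; the reader note linked above has the authoritative version:

```
# /etc/init/confluence.conf (fragment, hypothetical)
start on started crowd
stop on stopping crowd
```

Keying Confluence off Crowd's started/stopping events guarantees Crowd is up before Confluence tries to authenticate against it.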

Tuesday, July 27, 2010

Atlassian Stack: Confluence

GreenHopper says that I have to install Confluence so that is what I'll be doing today.  Dragging the Confluence card over from To Do to In Progress gets me started.  I'll be following the Confluence installation guide using the standalone directions.

  1. download and extract the standalone version into /opt
  2. make a soft link to the new directory so I can get to it via /opt/confluence
  3. the docs say that for Debian-based Linux you might need to grab some X11 libraries.  Just to be safe, I'll follow their advice and install them: sudo apt-get install libice-dev libsm-dev libx11-dev libxext-dev libxp-dev libxt-dev libxtst-dev. Installation failed because I already have the libs installed (I'm running the desktop version of Ubuntu so the X11 stuff is already available).
  4. created a confluence user and group
  5. created a directory to hold the Confluence data: mkdir /opt/confluence-home
  6. changed the ownership of that directory and the Confluence installation directory to the confluence account
  7. edited the confluence-init.properties to point to /opt/confluence-home
  8. Jira is running on ports 8005 and 8080 which is Confluence's default port, so I have to edit the server.xml file to use a new port.  I'll go with 8000 and 8090.  
  9. next, I've got to wire up Confluence to our mysql instance
  10. I chose not to modify the MySQL instance to specify InnoDb for fear of corrupting the existing databases.
  11. using Webmin, I created a new MySQL user named confluenceuser and gave it full privs.
  12. using Webmin, created a new database named confluence making sure to select UTF-8 as the character set and utf8_bin as the collation order.
  13. start up Confluence, connect to http://localhost:8090/ and run through the wizards
  14. I elected to start with an example site instead of an empty one
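The port change in step 8 boils down to two attributes in /opt/confluence/conf/server.xml; after the edit it looks roughly like this (other attributes omitted):

```xml
<Server port="8000" shutdown="SHUTDOWN">
  <Service name="Tomcat-Standalone">
    <Connector port="8090"/>
  </Service>
</Server>
```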
Everything went off without a hitch.  I did realize, however, that a task needs to be added to have Confluence auto-start at boot time.  The question is, what is the best way to add that story in Greenhopper?  Specifically, how can I make the task dependent?  I decided to go into the "install Confluence" story and make a sub-task.  That new task ended up in my To Do list.  Easy.  I found a workflow issue: I was allowed to resolve the master task despite having two open sub-tasks.  There must be a setting somewhere that prevents that.

    Monday, July 26, 2010

    Atlassian Stack: Subversion and Mercurial Installation

    I've got a few moments so I figured I would take an easy task and install Subversion and Mercurial.  I'm not sure if the Atlassian stack supports Mercurial but I'm going to install it anyway, just in case.  In Greenhopper, I moved the two tasks from the to-do list into my in-progress list by simply dragging and dropping the two cards from one column to the other.  I was in the Task Board view at the time.  Using Synaptic, I installed the subversion, subversion-tools, mercurial and mercurial-server packages.  The mercurial-server installation failed with a post-install trigger failure.  Since I'm not even sure if I can use that server, I decided not to worry about the problem.  I created a subversion user and group who will own the Subversion repository area.  I made my local account a member of the subversion group.  As my local user, I created the Subversion repository: svnadmin create /opt/svn.  I then changed the ownership over to the subversion account and made the group bit sticky: sudo chown -R subversion:subversion /opt/svn followed by sudo chmod -R g+s /opt/svn.  Lastly, I modified a copy of /etc/init/jira.conf so that it would start Subversion in daemon mode at startup.  I tested the server by using Subversion over SSH: svn list svn+ssh://localhost/opt/svn.  I created a mercurial user and group and added my local account to the mercurial group.  I then created a directory that will hold the various Mercurial repositories, making sure to set the ownership to mercurial:mercurial and making the group bit sticky.  I'll admit that I didn't do any real testing, but I think that when we discover an issue in the future, I'll be able to use the bug tracking features in Greenhopper.  I went into GH, logged the amount of time I worked on each issue and then resolved the task.
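The repository permission pattern described above, rehearsed on a scratch directory; the real commands target /opt/svn, use the subversion user, and need sudo:

```shell
# Group-sticky repository directory, tried without privileges.
REPO=${REPO:-/tmp/svn-demo}
mkdir -p "$REPO"
# svnadmin create "$REPO"   # the real step; commented out in case svnadmin isn't installed
chmod -R g+s "$REPO"        # setgid: files created inside inherit the directory's group
ls -ld "$REPO"
```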

    Atlassian Stack: GreenHopper Configuration

    I'm following the GreenHopper 101 document to make sure that the basic configuration is correct.

    1. install the recommended Labels plugin so I can make use of Epics
    2. shutdown Jira: sudo service jira stop
    3. copy the Labels plug into Jira: sudo -u jira cp jira-labels-plugin-2.4.jar /opt/jira/plugins/installed-plugins/
    4. restart jira: sudo service jira start
    5. follow the directions for configuring Jira for Scrum
    6. create a new project category: Installation
    7. create a new project: Atlassian
    8. set the project to use the Scrum template
    9. add Kanban constraints to the project (I don't see the view that the instructions say I should have). I noticed that the Task view wasn't coming up; the error message said it was because the project didn't have a release date.  I added a version to the project and gave it a release date.  Now I can get into the Task view, but I don't see the Compact (Kanban) view they say I should see.  I looked and Googled but never found anything.  Eventually, I gave up. I might take advantage of my support and see if they can help.
    10. I had to create a new dashboard based on the default dashboard before I could add the Agile gadget indicated by the directions
    11. I opened up GHS-1194 to see if Atlassian can help with the missing Compact (Kanban) menu selection.
    12. Going back into the server after Atlassian commented on my ticket, I found that the missing menu selection was available.  I'm guessing I had to reboot the server to make everything appear.
    13. I was finally able to configure the constraints as described in the document.  Multiple columns can be constrained so I just followed the example and did one.
    14. I should comment that while I was waiting for Atlassian to get back to me, I created 5 stories for Sprint 1 of the project.  I'm not sure if that, plus the server reboot, caused the Kanban menu selection to appear but I wanted to note it nonetheless.

    Tuesday, July 20, 2010

    Atlassian Stack: Crowd and Jira Integration

    1. Followed the Crowd Administration Guide, specifically the section on adding an application
    2. create a Crowd directory for Jira. I called mine Jira
    3. create three groups in Crowd: jira-users, jira-developers, jira-administrators and add them to the Jira directory
    4. create at least one user who is in all 3 groups
    5. define the Jira application in crowd following the instructions provided
    6. copy the Crowd client libraries into the Jira application. I stopped the jira server.  Removed the existing crowd client jar, copied the new one from /opt/crowd/client and set the permissions so that the file was owned by the jira user and group.
    7. replace Jira's cache configuration file in /opt/jira/atlassian-jira/WEB-INF/classes crowd-ehcache.xml with the one in /opt/crowd/client/conf and reset the file ownership
    8. edit /opt/jira/atlassian-jira/WEB-INF/classes/osuser.xml and add the Crowd specific section described in the instructions
    9. edit /opt/jira/atlassian-jira/WEB-INF/classes/seraph-config.xml as instructed
    10. start jira
    11. log into Jira using the new Crowd user
    12. enable external user management as instructed
    Done. A lot of steps but Crowd now controls user accounts.  Next up: GreenHopper configuration.

    Atlassian Stack: GreenHopper

    1. grab a copy of GreenHopper, standalone version
    2. follow the installation guide
    3. shutdown jira - sudo service jira stop
    4. copy the GreenHopper jar to /opt/jira/plugins/installed-plugins. I also had to change permissions on the file to be owned by the jira user and group.
    5. start jira - sudo service jira start
    6. hit http://localhost:8080/ and log in as the administrator
    7. follow the directions for entering in the GreenHopper license
    That is it.  Tomorrow I'll configure GreenHopper because I intend to use it to help track the installation of the rest of the stack.

    Atlassian Stack: Jira (autostart)

    I found these directions for configuring Jira to auto-start.  The main recipe is for the old System V mechanism but a reader left a note on how to set things up for Upstart, which is what my Ubuntu installation uses.  I'll use his configuration file which I dropped into /etc/init.  All I did was edit his configuration to match my directory names, rebooted and then verified that I could hit http://localhost:8080/.  Worked like a champ.  Onto the next application.

    Monday, July 19, 2010

    Atlassian Stack: Jira

    Next up is Jira, the task/bug tracking portion of the stack.

    1. Grabbed the standalone version and unpacked into the /opt directory.  
    2. Made a soft link to the install so upgrades will be a bit easier: ln -s atlassian-jira-enterprise-4.1.2-standalone jira
    3. edit /opt/jira/atlassian-jira/WEB-INF/classes/jira-application.properties so that jira.home points to /opt/jira
    4. used Webmin to create a user just for the Jira process.  I called mine, wait for it...jira.
    5. changed ownership of the jira installation directory to be owned by the jira user
    6. start up jira as the jira user: sudo -u jira /opt/jira/bin/startup.sh
    7. I got a PermGen switch warning which pointed me to the troubleshooting guide.  Decided not to worry about it until I actually do run out of PermGen space
    8. I realized that the instructions did not walk me through how to connect Jira to my database, so I shut down the instance and followed those directions.
    9. using Webmin, I created a new db user named: jirauser and gave it all permissions
    10. using Webmin, created a new database name jira making sure to set it to use UTF-8 encodings
    11. copy the MySQL JDBC driver to the /opt/jira/lib directory.  Once I did that, I saw that a JDBC driver already existed in that directory, although a slightly older one, so I removed mine and left things as is.
    12. run the Jira configuration tool: sudo -u jira /opt/jira/bin/config.sh .  I had a problem doing this because the jira account isn't meant for logging in and doesn't have an environment.  To work around this, I reset the permissions on the installation directory to my account and re-ran the tool.  Once I made the proper configuration choices, I reset the files back to being owned by the jira account.
    13. started jira again using the command in step 6
    14. connected to Jira via the web browser: http://localhost:8080/
    15. ran through the configuration wizard. Decided to not accept the default directories and use /opt/jira-home instead.  Figured I wouldn't lose data to upgrades if I used a standalone directory.
    That's it.  Seems to work.  There are many configuration options that I'll need to plow through but I'll have to first learn how to get Jira to autostart at system bootup, like I did for Crowd.

    Sunday, July 18, 2010

    Atlassian Stack: Crowd (auto-start)

    One final tweak I need to make to Crowd is to get it to start automatically at system start up.  I'm using the Setting Crowd to Run Automatically and Use an Unprivileged System User on UNIX directions to make that happen.  Here are the basics of the steps I ran:
    1. using Webmin, create a crowd user and a crowd group, making sure to not allow the crowd user the ability to actually log in
    2. changed ownership of the crowd installation directory and the crowd-home directory so that they were owned by the new crowd user and crowd group: sudo chown -R crowd:crowd /opt/atlassian-crowd-2.0.6/ and sudo chown -R crowd:crowd /opt/crowd-home/ (the script provided in the instructions did not appear to change all the permissions needed).
    3. used the /etc/init/crowd.conf provided by a reader in the notes (current Ubuntu uses a slightly different startup/shutdown mechanism: upstart) instead of the Sys V script provided in the doc
    4. issued sudo service crowd start from the command line to start up crowd
    5. verified crowd had started by hitting http://localhost:8095/crowd/

    Saturday, July 17, 2010

    Java Testing and Design - 2.4 The Web Rubric

    As with many things, clearly stated objectives and criteria are useful in determining the health of your system.  A rubric has certain advantages:

    1. assessment is more objective and consistent
    2. helps the testers focus on clarifying criteria into specific terms
    3. clearly describes to the developer how her work will be evaluated and what is expected
    4. provides benchmarks against which the developers can measure progress
    Here is an example rubric:

    The criteria are assessed through system use and rated on four levels:

    Basic features are functioning
    • Level 1 (Beginning): Few features work correctly the first time used.
    • Level 2 (Developing): Many features do not operate.  Some missing features are required to complete work.
    • Level 3 (Standard): Most features operate.  Workarounds are available to complete work.
    • Level 4 (Above Standard): All features work correctly every time they are used.

    Speed of operations
    • Level 1 (Beginning): Many features never complete.
    • Level 2 (Developing): Most features complete before the user loses interest.
    • Level 3 (Standard): Most features complete in 3 seconds or less.
    • Level 4 (Above Standard): All features complete in 3 seconds or less.

    Correct operation
    • Level 1 (Beginning): Few features finish successfully without an error condition.
    • Level 2 (Developing): Some features end with an error condition.
    • Level 3 (Standard): Most features complete successfully.
    • Level 4 (Above Standard): All features complete successfully.

    Rubrics can take many forms but typically offer the following:

    1. focus on measuring stated objectives (performance, behavior or quality)
    2. use a range to rate performance
    3. contain specific performance characteristics arranged in levels indicating the degree to which a standard has been met.
    A rubric's job is to help remove subjectivity when assessing the health and quality of the application.

    Friday, July 16, 2010

    Java Testing and Design - 2.3 Web-Enabled Application Measurement Tools

    The best measurements of health for a web application include:

    1. meantime between failures in seconds
    2. amount of time, in seconds, for each user session, sometimes known as a transaction
    3. application availability and peak usage periods
    4. which media elements are most used (HTML, Flash, JavaScript, etc.)
    Developing criteria based on these elements is difficult, but the author promises to offer help in upcoming sections.  One question I had when I read this section: how do you accurately determine what a failure is?  If you are trolling through server logs looking for error messages, you are likely to find lots of them.  What do I have to do in my application to make it easier to distinguish a real problem from a benign one?

    Thursday, July 15, 2010

    Atlassian Stack: Crowd

    I finally got off my lazy butt and shelled out a couple dollars and purchased starter licenses for the entire Atlassian stack.  I've been a user of a few of their components for a few months now and liked what I've seen so far.  I'm of the opinion that most of the power comes when everything is fully integrated.  I'm always writing Java programs, mostly to learn new frameworks, but have come up with an idea for something on a larger scale.  I'm thinking I can use the Atlassian tools to help guide me through the specification and development process.  I'll blog my journey from installation to day-to-day use.  Wish me luck.

    I've used the free, personal-use licenses that Atlassian used to provide and always ran into a problem with their licensing mechanism: it is tied to a machine.  If I wiped my laptop to install the latest version of Ubuntu, I would have to contact Atlassian in order to get a new key.  Hoping to avoid that pain, as well as providing a "time machine" I'll be installing the stack into a virtual Ubuntu server machine based on Virtual Box.  Here are the basic steps I've used to get started:

    1. created a 20 GB Virtual Box VM based on Ubuntu 10.04 Server edition
    2. added the Webmin repository and installed Webmin.  The box will be headless and I'll need a tool to help me manage the box remotely.
    3. updated the box so it is patched to current levels
    4. downloaded and installed the current Sun Java 6 JDK taking care to export both JAVA_HOME and JDK_HOME in the /etc/environment script.  In that script, I also appended the path to point to the JDK bin directory.
    5. installed Virtual Box Guest Additions using these instructions 
    6. using Webmin, installed mysql-server and mysql-client  (I cheated and looked ahead at the installation directions and knew that a database was needed)
    That gives me a basic Java-enabled machine to work with.  The plan is to take a snapshot prior to the installation of each new server, giving me the chance to roll back in case something really goes wrong.
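Step 4's changes end up as plain KEY=value lines in /etc/environment; the JDK path below is an example -- use wherever you unpacked the Sun JDK:

```
JAVA_HOME="/usr/lib/jvm/java-6-sun"
JDK_HOME="/usr/lib/jvm/java-6-sun"
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-6-sun/bin"
```

Note that /etc/environment is not a shell script, so the PATH has to be spelled out in full rather than appended with $PATH.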

    Crowd is a single sign-on solution and I figured it makes sense for it to be the first service installed.  The documentation says that deploying their applications into a single Apache Tomcat instance is not supported so I'll use their standalone versions instead.


    1. download and unzip Crowd.  I placed mine in /opt
    2. created /opt/crowd-home and edited /opt/crowd/crowd-webapp/WEB-INF/classes/crowd-init.properties to point to it
    3. using Webmin, I created a MySQL user named crowduser and gave the account all db rights
    4. using Webmin, I created the Crowd database, named crowd, taking care to specify UTF-8 as the character encoding
    5. changed the isolation level on MySQL: sudo vi /etc/mysql/my.cnf and added transaction-isolation = READ-COMMITTED  into the [mysqld] section
    6. using Webmin, restarted MySQL so the setting would take effect
    7. download the JDBC driver and copy the JAR file to /opt/crowd/apache-tomcat/lib
    8. start crowd via start_crowd.sh
    9. connected to Crowd via http://localhost:8095/crowd
    10. followed the steps in the setup wizard
    That was it.  I have no idea how to set up the application, but a quick glance at the admin screens makes it look as though adding users and roles will be pretty simple.
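For reference, the isolation-level tweak from step 5 is just two lines in /etc/mysql/my.cnf:

```
[mysqld]
transaction-isolation = READ-COMMITTED
```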

      Java Testing and Design - 2.2 Defining Criteria for Good Web Performance

      The author asserts that three criteria must be checked if you are to determine whether your web application is healthy:

      • are the features working?
      • is the performance acceptable?
      • how often does it fail?
      Create a short list of the most important features of your application and create automated tests for them.  Consult real users and find out how long they will wait for a feature before abandoning it and moving on to something else.  Consult your server logs to find out exactly how often the system fails. Once you know that, you decide if the failure rate is too high for your situation.

      Wednesday, July 14, 2010

      Java Testing and Design - 2.1 Just What Are Criteria?

      Web-enabled applications need to be constantly tested, monitored and watched, primarily because users' expectations of the application change over time.  At first, they might care that the application works at all.  Later, they might care about how fast it works.  Even later, they might care whether it does exactly what they need.  You can use test agents to test your application daily.  If you build your agents based on archetypes, you can gather meaningful data that can help you gauge whether or not your application is meeting your users' needs.

      The more time spent on defining archetypes, the easier it will be for your team to understand exactly what the system needs to do.

      1. Archetypes make an emotional connection between the goals of a prototypical user of the application and the software team.  Team members will have an easy-to-understand example of a person that prototypically will use your application.  Rather than discussing individual features in the application, the team will be able to refer to functions that the archetype will want to regularly use to accomplish their personal goals.
      2. Archetypes bring discussions of feature, form, and function in the application from a vague general topic to a focused user goal-oriented discussion.
      3. Archetypes give the team an understanding of where new functions, bugs, and problems need to be prioritized in the overall schedule.  
      Archetypes make it easier to know what to test and why because the test agents, modeled after the archetypes, will be testing in a way that is similar to your actual users.  Archetypes can also be helpful in identifying when performance becomes a problem.

      Tuesday, July 13, 2010

      Java Testing and Design - 1.5 Testing Methods

      This section describes the following testing methods:

      • click-stream testing
      • unit testing (state, boundary, error, privilege)
      • functional system testing (transactions, end-to-end tests)
      • scalability and performance testing (load tests)
      • quality of service testing
      Click-stream testing is when you keep track of the clicks a user performs as she navigates your site.  The thinking is that more clicks equal more advertising revenue.  This type of testing doesn't tell you whether the user was able to achieve her goal when visiting your site, and it isn't very useful when trying to determine the quality of your software.

      Unit testing is when a developer tests an individual software module by providing it various inputs and comparing its actual outputs against expected outputs.  Although useful to developers, module-based testing doesn't tell you how useful or healthy the entire web application is.  Don't rely solely on unit tests when assessing your system's health.
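
      As a sketch, a unit test of this kind just calls the module with normal, boundary and error inputs and compares the results.  PriceCalculator here is a made-up example module, not something from the book:

```java
// Hypothetical module under test; the names are illustrative only.
class PriceCalculator {
    // Applies a percentage discount and clamps the result at zero.
    static double discountedPrice(double price, double discountPercent) {
        return Math.max(price * (1.0 - discountPercent / 100.0), 0.0);
    }
}

public class PriceCalculatorTest {
    public static void main(String[] args) {
        // Normal input: 10% off 100.0 is 90.0.
        check(PriceCalculator.discountedPrice(100.0, 10.0) == 90.0);
        // Boundary input: a 100% discount yields exactly zero.
        check(PriceCalculator.discountedPrice(50.0, 100.0) == 0.0);
        // Error input: discounts over 100% must not go negative.
        check(PriceCalculator.discountedPrice(50.0, 150.0) == 0.0);
        System.out.println("all checks passed");
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError("unit test failed");
    }
}
```

      The point of the book's caveat still stands: every one of these checks can pass while the deployed application is broken end-to-end.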

      Functional system testing takes a perspective closer to that of the end-user in that it checks the entire application from end to end.  A system test will start at the browser, travel through the web servers, individual software modules and the database, and then back.  When system testing, you have to understand your system well enough to know exactly what front-end actions trigger interactions between the various back-end components.  Bringing up a simple static HTML page isn't much of a test because it doesn't fire the software modules that access the database and other integration points.

      Scalability and Performance testing tries to understand how the system behaves with concurrent users accessing the system.   Typically, the functional tests can be used to drive the system and obtain loaded system performance metrics.

      Quality of Service testing attempts to show that the application is meeting service level agreements.  Functional tests should be used to monitor the web application's health.  Done over long periods of time, you can get a picture of what is normal and what is not.

      A test agent is a framework or process that allows for automation of the system's test.  Typically, an agent must provide the following:

      • checklist - defines the conditions and states of the test, such as being able to log into the system
      • test process - lists the system transactions that are needed to perform the checklist
      • reporting method - records the results after the process and checklist are completed
      The more automated a testing agent is, the easier it is to certify the health of the application.
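
      A rough sketch of those three parts in Java might look like this.  The class and method names are my own invention, not TestMaker's API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal test agent: a checklist of conditions, a process of
// transactions, and a report of recorded results.
public class TestAgent {
    private final List<String> checklist = new ArrayList<>();  // conditions to verify
    private final List<Runnable> process = new ArrayList<>();  // transactions to run
    private final List<String> report = new ArrayList<>();     // recorded results

    void condition(String description) { checklist.add(description); }
    void transaction(Runnable step)    { process.add(step); }

    // Runs every transaction, then records one line per checklist item.
    List<String> run() {
        for (Runnable step : process) {
            try {
                step.run();
            } catch (RuntimeException e) {
                report.add("FAIL: " + e.getMessage());
                return report;
            }
        }
        for (String description : checklist) {
            report.add("PASS: " + description);
        }
        return report;
    }

    public static void main(String[] args) {
        TestAgent agent = new TestAgent();
        agent.condition("user can log into the system");
        // Stand-in transaction; a real agent would drive HTTP requests here.
        agent.transaction(() -> { /* log in, check the response code */ });
        System.out.println(agent.run());  // [PASS: user can log into the system]
    }
}
```

      The report list is what you would persist after each run so you can compare today's health against yesterday's.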

      You can use your automated test agent to test for scalability and performance.  Scalability describes the system's ability to serve users under varying levels of load.  Run your tests with 1, 100, 500 and 1000 concurrent users, measure how long the tests take to respond, and you have an idea of how scalable your system is.  Performance measures the system's ability to deliver accurate results under load.  If the system responds quickly (scales) but returns incorrect results or errors, it isn't performing very well.  Taken together, performance and scalability can tell you how your system behaves under load.
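
      A minimal sketch of such a run fires N concurrent copies of a functional test and collects each response time.  The sleeping "transaction" below is a stand-in for a real functional test, and the names are mine, not the book's:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LoadRunner {
    // Runs the given transaction once per simulated user, all at the same
    // time, and returns the per-user response times in milliseconds.
    static List<Long> run(int concurrentUsers, Callable<Void> transaction) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < concurrentUsers; i++) {
            futures.add(pool.submit(() -> {
                long start = System.nanoTime();
                transaction.call();                              // one functional test
                return (System.nanoTime() - start) / 1_000_000;  // elapsed millis
            }));
        }
        List<Long> latencies = new ArrayList<>();
        for (Future<Long> f : futures) {
            latencies.add(f.get());                              // wait for every user
        }
        pool.shutdown();
        return latencies;
    }

    public static void main(String[] args) throws Exception {
        // Compare response times at two load levels; a real run would also
        // verify that the results coming back are correct (performance).
        for (int users : new int[] { 1, 10 }) {
            List<Long> latencies = run(users, () -> { Thread.sleep(20); return null; });
            System.out.println(users + " users -> " + latencies.size() + " samples");
        }
    }
}
```

      Repeat the same run at 1, 100, 500 and 1000 users and plot the latencies and you have the scalability picture the author describes.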

      Creating functional tests can be difficult because it can be hard to know exactly how the tests should interact with the system.  One idea to try is Archetypes.  Construct several testing "personalities" which describe different types of system users and capture the way they interact with the system.  For example, a field agent might be somebody who does lots of reads and relies heavily on the querying capabilities of the system.  A manager might be more of a power user who uses the reporting portion of the system to generate his monthly status reports for his boss.  The more detailed an archetype you create, the better your test agents will be and the better tested your system will be.  In practice, automation of the test agents is the only practical way of running the tests in any consistent and meaningful way.

      Monday, July 12, 2010

      Java Testing and Design - 1.4 A Concise History of Software Development

      This chapter shows how computer programming grew from very small and tightly controlled environments to the large free-for-all that is today's web environment.  The interesting part of the discussion revolves around the modern software development processes.  A flow that the author has termed The Internet Software Development Lifecycle looks like this:

      1. Specify the program from a mock-up of a Web site
      2. Write the software
      3. Unit test the application
      4. Fix the problems found in the unit test
      5. Internal employees test the application
      6. Fix the problems found
      7. Publish software to the Internet
      8. Rapidly add minor bug fixes to the live servers
      The web-style of software engineering makes use of many interconnected systems living all over the network.  The same decentralization can also be applied to the software components themselves, as many parts of an application are implemented using shared software.  The author calls to our attention the current "experiment" with open source software development and distribution as well as agile programming.

      Next up, Testing Methods...

      Sunday, July 11, 2010

      Java Testing and Design - 1.3 Why Writing High-Quality Software Today Is Hard

      In this section, the author offers 5 prime reasons why creating software is difficult:

      1. The Myth of Version 2.0 Solidity - this is when people believe that scrapping the 1.0 product and starting from scratch will create a superior product.  A rewrite is a major overhaul and in the author's experience a more moderate approach has proven its worth.  There is more stress to get the rewrite to the same state that the 1.0 is at and there is no "down time" for developers to recharge their batteries.  A smaller, iterative technique allows for constant improvements without the burnout.  
      2. Management’s Quest for Grail - this is when management lays down the law and says all projects and all teams shall use methodology X.  The author contends that there are different programming styles which require different degrees of architectural direction: Users, Super Users, Administrators, Script Writers, Procedural Programmers, Object Programmers, Interoperability Programmers and Orchestration Programmers.   A team must select a methodology that matches the ability and style of the team members.  Forcing a team of Script Writers to adopt SCRUM might not be a wise decision.
      3. Trying for Homogeneity When Heterogeneity Rules - trying to standardize the entire company on a single platform is not practical because you need to exorcise the "impure" parts of the existing solutions, which never happens in practice.    Solutions were built upon technologies that made sense at the time and provide useful functionality so deal with it.
      4. The Language of Bugs - instead of using bug numbers as the vocabulary to describe if a system is working correctly, why not use a test agent that is modeled after a single user?  Choose just one user, understand their goals, and learn what steps they expect to use. Then model a test against the single user. The better understood that individual user’s needs are, the more valuable your test agent will be.
      5. The Evil Twin Vice Presidents Problem - this is the scenario where development and test is run by one person and the data center is run by another.  This promotes sniping between the two groups where cooperation would be more productive.  The author recommends providing a common testing framework that is used by development and test to run functional, scalability and performance tests.  The IT guys can leverage that framework to verify the health of the system and provide QoS reports. 

      Saturday, July 10, 2010

      Java Testing and Design - 1.2 Web-Enabled Applications, the New Frontier

      In this section, the author describes how software has been architected in the past.  The interesting point is how he sees the new emerging architecture, namely one where the desktop contains both presentation and some business logic but much of the business logic lies elsewhere, out on the web somewhere, where it is accessed by RESTful and SOAP APIs.  He believes that the driving force behind the new architecture is a set of "firsts":

      • This is the first time in the information technology (IT) industry that software developers, QA technicians, and IT managers agree on interoperability architecture.
      • This is the first time that developers, QA technicians, and IT managers agree that security needs to be federated.
      • This is the first time that developers, QA technicians, and IT managers agree that scalability in every application is critical to achieving our goals.
      • This is the first time that government regulation is part of our software application requirements.
      • This is the first time that interoperability tools and technology are delivered with integrated development environment (IDE) tools.
      • This is the first time that Java and other interoperable development environments and open source projects have reduced operating system and platform dependencies to a point where, in many cases, it really does not matter any more.
      In summary, now that we have standards, tooling and infrastructure that allow applications to inter-operate, we'll be doing lots more of it.  I use a nifty little program called XBMC to serve up media files in my home.  One of its nicest features is its ability to show me information about the movie, TV show or song I'm thinking about playing.  It can do this because it can use the web, and its standard protocols, to mine the internet for information.  XBMC wouldn't be nearly as useful if it couldn't leverage the data contained in other applications.

      Friday, July 9, 2010

      Java Testing and Design - 1.1 The First Three Axioms

      I'll be reading Java Testing and Design: From Unit Testing to Automated Web Tests by Frank Cohen and summarizing the chapters.  After skimming it I think it'll be an interesting read.

      The author briefly describes his first Basic program and progresses to a more modern programming language.  The upshot is that he learned the three axioms of development:

      1. even though a program contains only a single line of code, it needs to be maintained
      2. every piece of software inter-operates with other software
      3. there is more to delivering high-quality software than writing code
      It is these axioms that help the author deliver quality, web-enabled Java software.  Some of that software is TestMaker, an open-source test automation utility for which the author is the architect.  Hopefully, these axioms will allow the author to show us some techniques for improving our ability to write better software.

      Thursday, July 8, 2010

      Prefactoring: Plan Globally, Develop Locally

      Plan Globally, Develop Locally. Incremental implementations should fit into a global plan.

      The main idea here is that as you go through your use cases and iterations, take a few moments to examine your objects to see if their structure makes sense, in broad terms, with the existing use cases as well as the planned ones.  You can't know everything you'll need to know before you start typing code, but you can look at your planned implementation for the sprint to make sure you won't have to do a major refactoring later on.  Try to keep changes as small as possible.  Changing the implementation of a class is less painful than restructuring an entire inheritance hierarchy.

      Wednesday, July 7, 2010

      Prefactoring: Don't Repeat Yourself (DRY)

      Don't Repeat Yourself (DRY). Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

      This is a really powerful idea to grasp and sometimes very hard to adhere to.  Working on systems where you have to remember to change 4 different files just to make a single logical change is painful and error-prone.  If you know you want to stay DRY, and who doesn't, then you start to see your solutions in a different light.  Sometimes you have to write a custom tool that churns through an XML file to generate the multiple pieces the system needs to support a new feature, but at least you have a canonical definition of that feature.  One tool that might help you stay DRY is MPS.  You can create a custom language to describe the solution to a problem, such as generating Hibernate and JAXB aware Java classes from a single definition file.  Having to look in multiple places to make sure a change gets through is going to cause a failure in the system at some point.

      Tuesday, July 6, 2010

      Prefactoring: Document Your Assumptions and Your Decisions

      Document Your Assumptions and Your Decisions.  Keep a journal for retrospectives.

      The human memory isn't very good -- at least mine isn't -- so it makes sense to keep a journal of the design decisions you've made as you've implemented the system.  The code will tell you what decision you made, but often it won't tell you why you made that choice.  As the system evolves, those whys can become important as new decisions need to be made.  Did you assume that the system would be operating on a fast, reliable network?  Did you assume that nobody would ever want to manipulate that particular sub-system, so no RESTful API was put in place?  Having a searchable repository of your decisions is useful in retrospectives as well as in answering the age-old question of "why did we do that again?"  I use a wiki to keep my notes, mainly because it is easy to use and easily searched.

      Monday, July 5, 2010

      Prefactoring: Think About the Big Picture

      Think About the Big Picture: Decisions within a system should be congruent with the big picture.  What does that mean? It is all about context, with the context being the system's architecture and business purpose.  The definition of architecture that I've come to love is "important things that are expensive to change".  For some systems, the message format could be considered part of the architecture because it spikes through all layers of the system.  For other systems the use of the Java EE stack would probably be part of the architecture because it would be a lot of work to shift to a POJO model.  If you can somehow get the entire team to share a vision of the architecture and make sure the important decisions are made based on the context of the architecture and its primary purpose, the project has a better chance of succeeding.

      Sunday, July 4, 2010

      Prefactoring: Don't Reinvent the Wheel

      Today, I'd like to begin summarizing some points from the book Prefactoring: Extreme Abstraction, Extreme Separation, Extreme Readability by Ken Pugh.  The book is organized as a series of stories that highlight a particular guideline.  Each post will focus on a single guideline.  Today's is Don't Reinvent the Wheel: look for existing solutions to problems before creating new solutions.

      As a developer, I'm as guilty as the next guy of Not Invented Here syndrome, but I've come to appreciate the fact that if I'm going to get paid, I need to work on solving the problem and not on creating a shiny new tool that I think will help me solve the problem.  Sometimes I can grab an entire product from the open source community and solve the problem, but often I have to integrate many pieces together in order to get things done.  For me, I tend to gravitate to the Spring portfolio of solutions.  I'm comfortable with the programming model and they tend to stay out of my way and let me focus on the problem at hand.  Most importantly, they free me up from having to write yet another custom configuration file parser or MVC web framework.  The faster I can create a solution, the more I stay interested and the happier my boss is.

      Saturday, July 3, 2010

      Ship It! - Summary

      I really enjoyed reading this book.  It isn't a huge tome, and it makes you think.  I like the fact that they take an a la carte way of looking at process.  I found the discussion around Tracer Bullet Development particularly interesting.  I think this book is worth the time and money, so go out and buy yourself a copy.

      Friday, July 2, 2010

      Ship It! - 5.18 We're Never Done

      How do you decide what to do? Use The List to break down the product into individual features. You’ll know your project is finished when all the features on The List are implemented and are working well together (of course, feature integration should be one of the tasks on The List!).

      Feature list guidelines:
      • Break down any item with a time estimate of more than one week into subtasks. It’s okay to have a top-level task that takes weeks or months, but that estimate is just a guess unless you back it up with estimates for its subtasks.
      • Any item that takes less than one day is too small. If an item can be scheduled for a time period shorter than a day, that item is probably too low-level for The List.
      • A single customer example (or use case or scenario) can involve multiple features in The List. Don’t try to force an entire example into a low-level item in The List; break it into subtasks.
      • Add priorities to the items on The List, then stick with them. Don’t work on a priority-two item while there are priority-one items that are unfinished. But feel free to change those priorities as it becomes necessary.
      • Assign specific people to each feature on The List. You can do it on the fly (as one person finishes a task, they “sign up” for another), or you can assign them all up front, and then change them as necessary. It depends on how your team works best.
      • Be flexible. Use change to your advantage. Changing The List means that you’re improving and refining it, that you’re getting customer feedback, and that you’re matching The List with the real needs of your customers.
      Tip 40: The list is a living document.  Change is life.

      Don’t just remove an item from The List after it’s finished. Keep a copy of completed features (along with dates, priority, and assigned developer) to use as your audit trail. When you’re asked, “Why didn’t feature X go in?” or “What features did go in?” The List will give you the answers.

      Tip 41: If it's not on The List, it's not part of the project.

      Tip 42: Always give feedback fast

      Thursday, July 1, 2010

      Ship It! - 5.17 Features Keep Creeping In

      When someone requests a new feature, look at the tasks your team is currently implementing.  Is there time to implement the new feature before the next delivery?  Will the new feature work with the already-scheduled features?  Will it even make sense in the context of the planned product?  If the answer to these questions is “yes,” then by all means add the feature to The List.  Decide how important it is compared to the other features, and assign it a priority and a developer to implement it.  However, if the answer to one or more of these questions is “no,” then don’t add the feature, regardless of the pressure you get from the requester.  Instead, gently but firmly use The List to explain why you won’t implement the feature.

      Wednesday, June 30, 2010

      Ship It! - 5.16 We're on a "Death March" Project

      First, create a new project schedule. Use The List, and put time estimates on each item. Be sure these are realistic time estimates, not Death March estimates. Work with your tech lead to get the priorities correct and in line with management. Second, with the time estimates on The List, put together a time line for the project. Publicize this schedule. Put it on your white board or your web site. Often schedule makers create time lines simply because they don’t understand the work involved. Help them understand. Once you have a better idea of how much work you can reasonably accomplish, show the time line to your manager. Tell them you think the project is in trouble. Management may not be happy with your predictions, but happiness isn’t the goal. Try to show them that if the schedule doesn’t match reality, the schedule can’t be met (although in some situations, it may be decided to keep the bad schedule for other reasons). Now at this point, you’ve got two choices: move the date or drop the features. If you decide to move the date, keep an active copy of The List so that nobody can add features without adjusting the time line again.

      Tuesday, June 29, 2010

      Ship It! - 5.15 We're Junior Developers, With No Mentor

      First, get your team lead or manager to start holding daily meetings with your team. Second, introduce code reviews, and make sure you or one of the other seniors attend each review.

      Monday, June 28, 2010

      Ship It! - 5.14 There's No Automated Testing

      The biggest complaint that people have about automated tests is that maintaining them is too much trouble.  Before you start writing and committing tests, be sure that you have a CI system in place.  Set it up to run the tests every time the code changes.  If you have test code that you’ve been running manually, port it to the CI system.  Write your tests using mock client testing to get the maximum return for each test.  It’s too late for you to write a unit test for every method in your code.  Mock client tests are more efficient because a single test exercises a lot of code.  Identify the tests to write using defect-driven testing.  This will let you add tests where they can do the most good.  Only add a test if there is an active bug there right now.  This means that any test you add will address the most current issues in your code, and you’ll get the maximum possible benefit out of every test.

      Tip 39: Test where the bugs are

      Getting people to use sound engineering practices even for silly old test code is important.  Brittle tests are more likely to get turned off -- which helps nobody.

      Sunday, June 27, 2010

      Ship It! - 5.13 The New Practice Didn't Help

      When Not to Introduce a Practice
      Don’t try to introduce a new practice or process if there isn’t a problem that needs fixing. Never introduce a change because it’s “the right thing to do.” Instead, identify the actual problems in your shop, and then figure out how to fix them.

      Tip 34: Only fix what needs fixing

      Tip 35: Disruptive "best practices" aren't!

      And of course, make sure the practice or process you’re considering will actually improve things. If it won’t make things run faster and more efficiently, your team won’t (and shouldn’t!) adopt it.

      How to Introduce a New Practice
      There are two main things you’ve got to do: demonstrate and persuade. You’ve got to get buy-in from several groups of people. First and foremost are the people who will actually be practicing the new practice, namely, your team. If they aren’t excited about it, it won’t matter if anyone else is.

      Tip 36: Innovate from the bottom up

      Show them the process or tool; don’t just tell them about it. In particular, show them how well it works, especially in comparison with the old way of doing things. The key is to prove that this new-fangled idea is everything you say it is.

      Tip 37: Show, don't just tell

      Tip 38: Cultivate management buy-in

      I've personally done the show-don't-tell tactic and it works.

      Saturday, June 26, 2010

      Ship It! - 5.12 Can't Get "Buy-In" on Essential Points

      Sell the new practice or process to the team; don’t just dictate policy. Don’t just preach it; demonstrate it. People respond to a working example more often than to a lecture, no matter how good the concept. Make it easy for your team to change. Give them extra time to come up to speed on the new practice. Get them books or training if they need it. And choose the right time to introduce the new practice or process. Show your team how these practices benefit them personally.

      I always had more respect for the lead-by-example type of person.

      Friday, June 25, 2010

      Ship It! - 5.11 Team Doesn't Work Well Together

      • If you don’t already have them, start daily meetings or get your manager to do so.
      • Have team members review each other’s code before commits.
      • Meet once a week for lunch.

      Thursday, June 24, 2010

      Ship It! - 5.10 Your Manager Is Unhappy

      How do you keep your manager happy? It comes down to one word: communication.  Make sure your manager always understands what you are doing and why.
      • Use The List and get your manager to review it
      • Keep your manager up-to-date with your progress 
      Tip 32: Publicize what you're doing and why

      Wednesday, June 23, 2010

      Ship It! - 5.9 You've Got a Rogue Developer

      This particular problem can be solved only by the manager.

      A Manager can:
      • Use daily meetings to correct a rogue developer’s course.
      • Hold the rogue developer to the tasks on The List.
      • Use code reviews and automatic code change notifications to track a rogue developer’s work.
      • Use CI as a last resort to monitor a rogue developer’s work.

      Tuesday, June 22, 2010

      Ship It! - 5.8 Customers Are Unhappy

      Get the customer to work alongside you to help define the product while managing their expectations about what’s possible and what’s not. Don’t just get their input at the beginning of the project; keep in touch throughout the development process. So how do you communicate the project status to the customer? One of the best ways is to create a live system that can be used for demonstrations as early as possible, even if the feature set is incomplete. This is true whether the product has a GUI or is a set of application programming interfaces (APIs). Every time your team completes a new feature, add it to the demo program so the customer can see it and give you feedback.

      Tip 31: Deliver live demos early and often

      How do you communicate with the customer between releases of the demo? Encourage the customer to check The List as often as possible, and keep it up-to-date so they can see it change. Invite the customer into your development world so that they can see your direction and your problems. A customer is more likely to accept a delay if they understand what caused it. Also, encourage them to suggest changes to The List. If they don’t like the way the project is going, they should ask for you to change it. After all, they’re paying the bills. If they frequently communicate their needs and your team continually adjusts the product to meet them, then the end result should make everyone happy.

      Monday, June 21, 2010

      Ship It! - 5.7 Can't Build the Product Reliably

      Solution: understand the build and then script it. Besides increasing reliability, automated builds add a professional finish to your product, ease the transition for new team members, and let you easily use a CI system.

      Sunday, June 20, 2010

      Ship It! - 5.6 It Hurts When I Integrate Code

      How do you keep code integration from becoming an ongoing nightmare? The solution is simple: integrate your code more often to reduce collisions and simplify merges. Use a CI system to verify that things still work.
       
      Tip 30: Integrate often, and build and test continuously

      I used to work with a guy who synced his source area once a month and created headaches for both himself and the whole team.  That is when I learned to do frequent, small integrations.

      Saturday, June 19, 2010

      Ship It! - 5.5 But It Works for Me!

      If you can’t reproduce a bug on your own machine, the build machine is your insurance policy.  Use it to verify bugs that you can’t reproduce on your workstation.  Once you duplicate the bug on the build machine, figure out what’s different on your system.  Once you’ve duplicated the bug, craft a test to expose it, and then run the test on the clean box.  If you write the test carefully, you can even send it into the field to see if the customer can duplicate your results.  Of course, if you cannot reproduce the bug on the build machine either, then perhaps the customer has a configuration problem on their own machine.  Start asking questions about the environment they’ve set up.

      Tip 29: It has to work for everyone

      I think it is irresponsible and lazy for a developer to give up and say "it works on my machine", but I see it all the time.

      Friday, June 18, 2010

      Ship It! - 5.4 Tests? We Stopped Using Them

      When that happens, it’s time to start again. Perhaps no one was using the tests because they weren’t getting benefit from them. Normally the benefit is immediate and obvious, but there is an upfront cost to get started (or restarted) that you just have to put up with. The other problem you may face comes later in the project’s life. The more tests you add, the more time it takes to run them after a compile. At some point, you may not be able to run all the tests every time. Which tests should you pick to run continuously? Choose those tests that best exercise the code that is being actively developed. When you notice that a test is broken in nightly or weekly builds, move it to the CI test suite so it can be run more frequently.

      Tip 28: Continuously test changing code

      Until you have a CI environment that builds your product after each code change, you won’t be testing your code often enough. Even a daily build is not sufficient.

      I've found that some developers, myself included, don't use the same care when writing test code that they use when writing production code.  This tends to make the tests brittle, and people get pissed when the test code breaks due to some sort of interface change.  It takes some effort, but if you can avoid cranking out tests using cut-and-paste, then your tests are easier to fix if something changes on you.

      Thursday, June 17, 2010

      Ship It! - 5.3 Features Keep Breaking

      What’s the quickest way to fix this problem? Add an automated test suite. A mock client test suite is your best choice to get the product or platform stable as quickly as possible. Mock client tests exercise the most lines of code in the least amount of time.

      Tip 27: Mock client tests do the most with the least

      Not much to say here other than I know this works, but takes time and effort.

      Wednesday, June 16, 2010

      Ship It! - 5.2 Testing Untestable Code

      Tell your manager you want to start test driven refactoring. You want to start adding the simple hooks to your application that make it possible (or feasible) to do good automated testing. First, create (or borrow) a test plan for the product. Keep the plan simple at first. Don’t try to make it perfect on your first attempt. Shoot for a simple pass that exercises basic functionality. Second, write an automated test. Make the test go as far as you can. When you get stuck, what’s the least amount you can add to the product’s code to get the test running? Is it just a return code to an existing API or an “id” tag for an HTML field? Don’t try to fix every problem with your first pass. Don’t try to add “id” tags to every page in your product; shoot for the one page you need to get the test passing. If your test plan hits a roadblock, don’t stop. Remove the part of the test plan you can’t test and move on. Remember that you can also add the hard part to your next test plan.

      Tip 26: Use test driven refactoring to clean up untestable code

      Tuesday, June 15, 2010

      Ship It! - 5.1 Help! I've Inherited Legacy Code

      How to tackle this problem:
      • build it - First, figure out how to build it, and then script that build process. After that, it should be easy to automate the builds.
      • automate it - Your goal is to automatically build and test the entire product on a clean machine with an absolute minimum of manual intervention.  Document all the build steps and make this documentation publicly available.
      • test it - Figure out what the code does, then begin testing by writing mock client tests for it. Once you have the project building cleanly, you’ll want to confirm that it works. In order to write the tests, you’ll have to learn exactly what the product is supposed to do (no surprise there). Mock client tests are a good starting point: they test the broad functionality of a product because they act just as a product user would.
      • test it more - Figure out the product’s innards (things such as structure, flow-of-control, performance, and scalability), and write more tests for it. Unless the product is completely unused, there will be bugs you’ll need to fix (or at least document) or enhancements you’ll have to make. Normally these changes are a pretty scary thing to do to legacy code, because you’re never quite sure what you’re going to affect when you make a code change. But you can do this fearlessly because of the mock client tests you wrote as a safety net; they’ll keep you from breaking things too badly. Write a new test for every bug you fix and for every enhancement you add to the product. The type of test you write will depend on what you’re changing (e.g., a unit test for a low-level internal change, or a mock client test for a new feature). At this point you’re treating the legacy product in the same way you would any other product you support.
      Tip 25: Don't change legacy code until you can test it

      I find that testing a new code base forces me to understand the system.  I suppose the added benefit of writing tests for known bugs forces me to understand the defect tracking system as well.
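A mock client test, as I understand it, drives the product through its public entry point the way a user would. A minimal sketch (OrderSystem and its API are hypothetical stand-ins for the inherited legacy product, not anything from the book):

```java
// Sketch of a "mock client" test: exercise broad functionality
// through the public API only, never the internals.
class OrderSystem {
    private final java.util.List<String> orders = new java.util.ArrayList<>();

    String placeOrder(String item) {
        orders.add(item);
        return "ORD-" + orders.size();
    }

    int orderCount() {
        return orders.size();
    }
}

class MockClientTest {
    public static void main(String[] args) {
        OrderSystem system = new OrderSystem();
        // Act as a client would: place an order, verify the observable result.
        String id = system.placeOrder("widget");
        assert id.equals("ORD-1");
        assert system.orderCount() == 1;
        System.out.println("mock client test passed");
    }
}
```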

      Monday, June 14, 2010

      Ship It! - 4.8 Selling Tracer Bullets

      The primary benefits of TBD are:
      • teams can work in parallel
      • can see a working system earlier
      • momentum, people like to make progress
      • testing can begin just after the interfaces are defined
      • easier to move people between layers because they have a basic understanding of what each layer is supposed to do
      TBD Sequence:
      • propose system objects (layers)
      • propose interfaces
      • connect interfaces
      • add functions
      • refactor, refine, repeat
      Tip 24: You can't steer a boat unless it is moving

The main idea behind Tip 24 is that if your development progress is slow, you won't get timely feedback, making it difficult to correct course.
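The TBD sequence above can be sketched in a few lines of Java. The layers and method names here are my own illustration (the book doesn't prescribe any): propose the layers, define the interfaces, connect them with stubs, and only then add real functionality.

```java
// Propose the system objects (layers) as interfaces.
interface DataLayer {
    String fetchGreeting(); // canned data for now
}

interface WebLayer {
    String handleRequest();
}

// Stub implementations: everything compiles and runs, nothing does real work.
class StubDataLayer implements DataLayer {
    public String fetchGreeting() {
        return "hello from canned data"; // no database yet
    }
}

class StubWebLayer implements WebLayer {
    private final DataLayer data;
    StubWebLayer(DataLayer data) { this.data = data; }
    public String handleRequest() {
        // Connect the interfaces: the web layer calls the data layer.
        return data.fetchGreeting();
    }
}

class TracerBullet {
    public static void main(String[] args) {
        WebLayer web = new StubWebLayer(new StubDataLayer());
        // An end-to-end call works even though no layer is "real" yet.
        System.out.println(web.handleRequest());
    }
}
```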

      How to Get Started:

      • try it out
      • discuss the concepts with the team first
      • define system objects
      • define interfaces between them
      • write the interface stubs
      • make the stubs talk with each other
      • fill in the stubs with functional code
      You Are Doing It Right If...
      • the entire system is always up and running
      • team members understand the system objects as well as "their" objects
• team members feel comfortable helping out with other system objects
      • you can rewrite large portions of your code base and nothing breaks
I really like the idea of TBD.  I've operated in a similar manner on small projects just to see how stuff would "hang together" and enjoyed the process.  You run across stumbling blocks sooner, and it feels good to have a working system that you can demo at a moment's notice.  I'm curious to see how it functions on a larger scale.

      Sunday, June 13, 2010

      Ship It! - 4.7 Refactor and Refine

      As your teams add real functionality to their stub code, they will discover that one interface needs an extra variable passed in or that another interface was missed entirely. You can add this functionality at any time. When you make these changes, publicize them so that other teams that use the same interfaces won’t be surprised. If need be, just add a new interface with the extra variable instead of pulling one out from under another team. Remember that you never break the builds in a tracer bullet project. Any code that you add should extend the system, not break it.  You will also realize at some point that code you’ve written needs to be thrown out. Maybe the code is too slow or it returns the wrong results. Any one of a thousand things may be wrong with it. Feel free to completely refactor or rewrite code, as long as the interface still works the same way. You can make these changes at any time in a Tracer Bullet Development project. The rule is that you can’t change the interface that the other teams use without consent, but you can change the code behind those routines at will. As long as the interface works the same way, the other teams won’t be aware that you’ve changed the code. Their code should still use your interface the same way, only now it runs faster or returns the correct data.

      Tip 23: An encapsulated architecture is a scalable architecture

      I'm thinking that the term "scalable" refers to the number of developers that can productively work on a system, not how much additional load it can handle.  If the touch points are well defined you can have one or many developers working on a layer and the other parts of the system won't know or care.
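A small sketch of "change the code behind the interface at will" (the names and the caching idea are my own illustration, assuming a stable interface agreed on by the teams):

```java
// Callers depend only on PriceService, so the slow implementation
// can be thrown out without the other teams ever noticing.
interface PriceService {
    int priceOf(String sku);
}

class SlowPriceService implements PriceService {
    public int priceOf(String sku) {
        // pretend this recomputes the price from scratch on every call
        return sku.length() * 100;
    }
}

class CachingPriceService implements PriceService {
    private final java.util.Map<String, Integer> cache = new java.util.HashMap<>();
    public int priceOf(String sku) {
        // same interface, same answers, faster on repeat calls
        return cache.computeIfAbsent(sku, s -> s.length() * 100);
    }
}
```

Other teams' code compiles and behaves identically against either implementation; that is the encapsulation Tip 23 is getting at.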

      Saturday, June 12, 2010

Ship It! - 4.6 Fill In the Stubs with Functional Code

      You now have an end-to-end working system. Every piece can talk to each other, and all the code compiles and runs. It’s time for you to start making it do real work. Even as good fences make good neighbors, good interfaces make for good team interactions. Each team can now work in complete isolation if necessary and can start to fill in the logic behind each interface. Each of your teams now has a basic framework that they can begin to fill out. You can do whatever you like, as long as you don’t break the existing interfaces. No one can change APIs between layers unless both teams agree.  Don't implement the simple stuff first. Instead, target any area that contains new technology, is inherently difficult, or is core to your product.

      Tip 22: Solve the hardest problems first

Use The List to prioritize.  Do not let the system break, a.k.a. don't leave broken windows.

      It seems to me that in order to prevent breakage of the system, you need to have automated builds and tests in place.  If you have notifications of when people check stuff in, you can probably get an idea of what the "other guys" are doing without requiring lengthy status meetings.

      Friday, June 11, 2010

      Ship It! - 4.5 Make Your Layers Talk

      Now that you have completed your stubs, you’re ready to start making them talk to the other layers in your system. That may not sound important, but you’ll be surprised how many seemingly insignificant details will not work together the way everyone thought they would. Once you start adding callbacks, using different CORBA vendors, and trying to use Java RMI on different network subnets through firewalls and the like, it’s a whole different ballgame from “Hello, World!” As every team puts code into their stubs to access the other layers, they are proving that every technology involved actually can inter-operate. Will the customer run part of the system behind a firewall? Then you should too!

      Tip 21: If production uses it, you should too

      Be sure that your hollow shell works end to end before you invest time to make it do the real work.

      Your project now has the following:
      • a complete, documented architecture
      • a POC that shows your architecture works.  You can make a client invocation and see it run, end-to-end.
      • clear boundaries between teams
      • clear demarcations between areas of product functionality
      • experience meeting with the teams responsible for adjacent code layers. 
      Having all your teams talking with each other is a huge benefit and getting the layers to talk is one way of doing that.


I've experienced firsthand how painful integrating layers that should "just work" can be.  I think the idea of wiring up the layers early is a good one.

      Thursday, June 10, 2010

      Ship It! - 4.4 Write the Interface Stubs

      This is the easiest part of the project. Remember to keep everything as simple as possible. The goal of an interface is to be just thin enough to compile and be used. Be sure to finish one pass at all your interfaces before you insert code that adds functionality. Resist the temptation to start coding something easy.

      I'm wondering if they advocate Test First development here?  I'm thinking that part of defining a workable interface is defining a testable interface.  Hmmm....

      Wednesday, June 9, 2010

      Ship It! - 4.3 Collaborate to Define Interfaces

      The teams working on adjacent layers meet, and together they flesh out the interfaces that their layers share. If you know that the client application needs to log in, then you know that a log in call must exist in the web server layer. The teams collaborate and come to an agreement on the method names and signatures. You then code each method but return only canned data. In these meetings, you begin to define how the layers will communicate with each other. Your teams flesh out such details among themselves so that after this meeting (or as many meetings as it takes), everyone involved knows and understands the interface points between each layer. The best architectures aren’t defined by an “architect” in an ivory tower; they are collaborative efforts. Instead of having a guru drive by and drop a completed architecture document in your lap, your team works together, leveraging and increasing everyone’s experience.
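The log-in example above might come out of such a meeting looking like this (a sketch; the method name, signature, and canned response are illustrative, not from the book):

```java
// The interface the client and web server teams agreed on.
interface AuthService {
    boolean login(String user, String password);
}

// First cut: return only canned data, so the client team
// can start coding against the interface immediately.
class CannedAuthService implements AuthService {
    public boolean login(String user, String password) {
        // Canned response; real authentication comes later.
        return "demo".equals(user);
    }
}
```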

      Tips:
      • Always have a single person lead your meetings. This person always has the floor and must “give permission” before anyone can speak. Having a single person lead the meeting will help prevent the meeting from turning into a shouting match.
      • Record notes on a white board throughout the meeting. With the information on a white board, everyone can see what method signatures you’ve agreed on in real time. If you take notes on paper, inevitably someone won’t see what you’ve written.
      • Andy Hunt suggests trying using LEGOs or wooden building blocks for the objects in your system. You help the more junior members understand the system and the relationships between the different objects when you give them something tangible to see and touch. Sometimes the intangible nature of our work makes the system components difficult to visualize and understand. Whether you draw objects on the white board or move blocks on the table, have a visual or tactile representation of your system.
      • Record the interfaces and publish them. You can use a printed document, a web page, or a wiki, but regardless of what medium you use, you must make the information publicly available. The last place you want to keep secrets is an object’s interface.
      • Hold your meetings where you won’t be interrupted. You want to minimize the number of times you have to shift gears and answer questions.
      Tip 20: Architect as a group

      I'm thinking that a poor man's CASE tool, aka cheap digital camera, is a nice way to document the group's thinking.  I can't picture in my head, however, how I could use blocks to convey a software architecture.  I must do a bit of digging to see what I can find on the topic.

      Tuesday, June 8, 2010

      Ship It! - 4.2 Define Your System Objects

Your first step is to identify the layers into which your application can be divided. You want to be careful not to define lower-level objects. Be sure that each system object you define can stand alone. If you can create an object with clean lines of separation from other parts of the system, then it can be a system object. By keeping these objects as large as you can, your teams will be able to work independently for longer stretches of time. A system object must be large enough to justify a person or team working alone for some length of time. You also must be able to create a clean line between each system object. Think of your system objects as pots and your development teams as cooks. Sure, if the pot is big enough, everyone can stir in the same pot. It's a lot easier if everyone has their own pot. A different development team works on each layer, and each team assumes that their layer exists on a different computer. This means that all communication between layers is over the network. This prevents "cheating" by accessing the implementation directly and allows for scaling, because you can move a layer to a larger machine.

I like the idea of defining layers early, but I've become comfortable thinking in terms of the Ports and Adapters architectural model.  Can that be used in TBD? I'm thinking yes, because each Port is a layer in TBD.  I do think you might have to temper the "assume each layer is on a different computer" rule.  For example, many systems I've worked on had the persistence layer baked into the core.  One reason is to more easily allow database transactions to span all the logic they need to.  I'm not sure it makes sense to place additional complexity into the system just to make transactions span processes.

      Monday, June 7, 2010

      Ship It! - 4.1 Defining Your Process

      This chapter is about a development process based on the Tracer Bullet idea put forth in The Pragmatic Programmer: From Journeyman to Master.  I look forward to exploring their ideas.

      The practice of Tracer Bullet Development (TBD) lets you see where things are headed as soon as you start and helps you aim continuously long before you’re done.

      A process must answer the following:
      • Does it work for you?
      • Is it sustainable?
      Beware any process or methodology that claims to be the exclusive solution to every problem for all projects. This magic cure-all is just modern snake oil. Embrace a methodology that encourages periodic reevaluation and inclusion of whatever practices work well on your project. Be sure your process is a flexible one—don’t be afraid to change or adjust your process to see if a new, better practice can fit in. If you have a new idea, drop it in for a few weeks. If it works out, great! Make it a permanent addition. Otherwise, revise it or get rid of it. This experimentation is how you’ll find out what fits your shop the best. There are no sacred cows in a good process. Anything that works well can stay; anything that isn’t working must be removed or revised.

      Tip 19: The goal is software, not compliance

      How Does It Work?
      You create an end-to-end working system when you use TBD, but the system components are all hollow objects. You write code for all the big pieces of your system, but the objects aren’t doing any work. Your database access layer returns data, but it’s really returning canned data, not data from a database.

      Identify the major parts of your project, and divide your product into blocks of related functionality. For example, you might have blocks called client, web server, and database layer. Define the information that the blocks need to exchange and document the information with method signatures. These areas of layer interactions are called interfaces. Give each block to a different developer, team of developers, or corner of your mind (if you’re working solo). Write just enough code to make everything look as if it works. Think of it as an entire application of Mock Objects. With this thin, skeletal framework in place, you can begin to fill in the real logic inside each block. Finally, and most importantly remember that interfaces will change and evolve throughout the project. Your first shot will always miss the target, so be flexible and adjust your aim at any point. When another team approaches you for new interfaces, or to make changes to existing ones, go ahead and make the change. This is software, after all.
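The canned database access layer described above might look like this in Java (a sketch; CustomerDao and its data are my own illustration):

```java
// A canned database access layer: the method signature is real,
// but the data is hard-coded rather than read from a database.
class CustomerDao {
    java.util.List<String> findCustomerNames() {
        // Canned data lets the layers above be built and demoed
        // before any database exists.
        return java.util.List.of("Alice", "Bob");
    }
}
```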

I've done a form of this and we called it "spiking the architecture".  I found it to be useful because you can't anticipate some issues until you start manipulating code.  I don't think "big bang" development works, so TBD might be a useful tool for me.

      Sunday, June 6, 2010

      Ship It! - 3.6 Putting It All Together

      This chapter is a summary of stuff we've learned so far.

      The List
      • publicly available
      • prioritized
      • on an estimated time line
      • living
      Tech Lead
      • manages project's feature list
      • tracks developer's current tasks and status
      • helps assign priorities to each feature
      • insulates the team from external distractions
      Daily Meetings
      • keep them short
      • require specifics
      • list problems, but don't solve them
      Code Reviews:
      • small amount of code reviewed
      • one or two reviewers
      • happen frequently
      • don't publish code without a review
      Code change notifications:
      • email and publish notifications
      • list the reviewer's name
      • list purpose of the code change or addition
      • include the diff or file itself, size permitting

      Saturday, June 5, 2010

      Ship It! - 3.5 Send Code Change Notifications

      When you edit code, an automatic build system can notice the change and rebuild the project. Your next step is to publish that information so that every member of the team knows what changed. A change notification system pushes this information out to your entire shop, not just your immediate co-workers. The preferred way is to have your changes automatically emailed to each team member. Most automatic build systems will send changes for you (and they’ll usually publish your changes to a web page and RSS feeds as well). Change notifications should go out to your team each time code is checked into the source code management system.  Notifications should include:
      • Reviewer’s name.
      • Purpose of the code change or addition (for instance, which bug you’ve fixed or which feature you’ve added).
      • Difference between the new code and old code (Any major source code management system will generate this report for you.) If you’ve completely rewritten a block of code so large that it would make a diff meaningless, just include the new code. The same applies to new files.
      How to Get Started:
      • A program that watches your SCM generates notices and sends emails.
      • Make sure your team knows about the notifications before they start arriving.
      You're Doing Right If...
      • Notifications must be regular and trustworthy.
      • Don’t send out five-megabyte diffs!
To make this work, you would need to customize your SCM a bit.  You'll need something to trigger the CI server when a commit occurs.  You'll also need something that verifies that the requisite data is in the check-in comment.  One question I have is around the volume of information that could be generated.  Changes to one branch of the code might be okay, but what if there are multiple branches?
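However the notification gets triggered and mailed (that's up to the SCM hook and build server), assembling the message body with the required fields is simple. A sketch, with all names my own:

```java
// Sketch of formatting a change notification with the fields the
// section requires: reviewer, purpose, and the diff itself.
class ChangeNotification {
    static String format(String reviewer, String purpose, String diff) {
        StringBuilder sb = new StringBuilder();
        sb.append("Reviewer: ").append(reviewer).append('\n');
        sb.append("Purpose: ").append(purpose).append('\n');
        // Don't send out five-megabyte diffs!
        if (diff.length() > 1_000_000) {
            sb.append("Diff omitted (too large); see the repository.");
        } else {
            sb.append("Diff:\n").append(diff);
        }
        return sb.toString();
    }
}
```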

      Friday, June 4, 2010

      Ship It! - 3.4 Review All Code

      Can be painless if:
      • Only review a small amount of code.
      • There are one or two reviewers at most.
      • Review very frequently, often several times a day.
      To avoid MAD (Mighty Awful and Dreaded) reviews, separate your work into the smallest possible pieces and get each one reviewed independently and committed into the source code repository. Then if there’s a problem with any one area, it’s easily isolated. A good rule of thumb, however, is to never work for more than two days without getting a code review. Ideally there will be one review for each feature you add (or for each bug you fix).

      Tips:
      • Code reviews must involve at least one other developer. 
      • Do not make code publicly available without a review. Don’t add your changes to the source code from which your product is built until a review has been done. Part of the comments you include with the code’s check-in should list your reviewer’s name. Then, if there are questions about the reason for the code change and you’re not around, there is a second person who should be able to explain it (at least at a basic level).
      • Never use this code review rule as an excuse to not check in your code. If your company has a source code system that holds only production code, then put your code into a private area until it’s ready. This private area might be a separate code repository or another installation of the source code management system. Use whatever tools you need to get the code off your machine so that when your machine dies (as machines are prone to do), your code doesn’t die with it.
      • Reviewers maintain the right to reject code that they find unacceptable.  
      • If your code can’t be explained at a level the reviewer can understand, then the code must be simplified. As a reviewer, don’t sign off on anything you don’t understand and feel comfortable with. After all, your name is associated with this code as the reviewer.
      • Any code changes that are made can’t break any existing automated tests. Don’t waste your co-worker’s time asking for a code review if you haven’t yet run the tests. If you require existing tests to be updated, make the changes to those tests a part of the coding before the review. Any new tests that you are adding should be a part of the review as well. As a reviewer, always reject code changes if you think more tests are necessary.
      • Rotate the reviewers you use, but don’t be religious about it.
      • Keep code reviews informal.
      • When you introduce the code review process, you may need to appoint a few senior team members to be the mandatory reviewers; one of the senior team members must participate in every review at first.
      Tip 17: It's okay to say "later"

      Your management must require code reviews. If there’s no management buy-in, no one in your shop has any official motivation to participate. In other words, if no one has been told to help you, they probably won’t make time to do it, especially when deadlines are tight.

      Tip 18: Always review all code

      How to Get Started:
      • Be sure everyone understands the type of code review you have planned. Review frequently on smaller blocks of code. Don’t wait for weeks, accumulating hundreds or thousands of lines of changes. No MAD reviews for your team!
      • Have one of your senior team members sit in on each code review for the first few weeks or months. This is a great way to share knowledge and get the reviews on a solid foundation.
      • Make sure your code reviews are lightweight. It’s better to review too little code than too much. Having two overlapping reviews is better than having one larger one.
      • Introduce a code change notification system  at this time. It’s a great complement to your code reviews, and it helps to remind team members who forget to ask for reviews.
      • Make sure you have management buy-in before requiring all team members to participate. 
      You're Doing It Right If...

      • Do code reviews get an automatic approval? This shouldn’t happen unless everyone on the team is perfect.
      • Does every code review have major rewrites? If so, it indicates a problem somewhere: either with the coder, with the reviewer, or with the tech lead (who gave the directions that the coder and reviewer are using).
      • Do code reviews happen frequently? If the time between reviews is measured in weeks, you’re waiting too long.
      • Are you rotating reviewers?
      • Are you learning from the code reviews? If not, start asking more questions during your code reviews.
Wow.  I'm thinking code reviews are an area where there will be much resistance.  Only one of my past shops did code reviews, and I can't even remember if they helped quality.  I do remember, however, people getting pissed because they couldn't check in their code in time for a build due to a failed review; the coder disagreed with the reviewer that a "minor" issue should keep the code out.  I'm wondering if automation can be of any help here.  There are tools that can check for silly code style violations, like where the curly braces go, which might lessen the reviewer's load.  There are also tools like FindBugs that can detect threading issues, which, again, might lighten the reviewer's load.

      One thing I'm wondering about is how to avoid checking in code without a review, especially when working with remote developers?  Do I keep my own branch that gets merged with the trunk every day?  I could check in my proposed change into the branch, have it reviewed and then merge it into the trunk if it passes inspection.  E-mail the files? Are there other techniques?

      Thursday, June 3, 2010

      Ship It! - 3.3 Coordinate and Communicate Every Day

      Each team member briefly shares what they are working on and what problems they’ve run into. A good rule of thumb is to spend no more than one to two minutes per person. Remember that this meeting has the entire team tied up, so be mindful of the burn rate; keep it short and to the point.

      Tip 16: Use daily meetings for frequent course corrections

      How to Get Started:

      • Be sure everyone knows the format (which questions you want answered).
      • Everyone must answer the questions. There are no passes, and no exceptions.
      • At first, be lenient on the time restriction. A lot of new information is exchanged in the beginning, so you must allow communication to flow freely.
      • Hold your meetings at the same time and in the same place, every day. Make daily meetings a habit, not a chore to keep track of.
      • Post topics that are discussed during daily meetings on a web page  or plog (project log).
      • Pick a person to start the meeting, and then move clockwise (or counterclockwise) through the group. Randomly picking one team member after another is more apt to make them feel ambushed. 
      You're Doing It Right If...
      • Are the meetings useful? If no one in the group is learning anything, the reports might be too terse. If more details are needed in a particular area, push those topics into a side meeting with a smaller group. However, the two-minute rule is a guideline, not a law. You may find thirty seconds is just fine, or you may need three minutes.
      • Are meetings consistently held the same time and place every day, or do they fluctuate? Having daily meetings at the same time and place makes it easy to remember. Meetings can move occasionally, but avoid mixing things up frequently.
      • If you stopped holding the meetings, would people complain? They should! The team should come to depend on the daily meeting to stay “in the loop.” If the meetings can be dismissed, then they weren’t providing value. The team should rely on the daily meeting as an invaluable resource.
I've never participated in daily meetings, but I have worked in environments where people worked in a vacuum.  I'm thinking short, daily meetings are the way to go.

      Wednesday, June 2, 2010

      Ship It! - 3.2 A Tech Leads

      Your Tech Lead both oversees and carries the technical responsibility for your software project. Having a Tech Lead frees up your manager to handle bureaucratic matters while delegating the technical aspects to someone who’s better equipped. A tech lead does the following:
      • Make sure your team’s work priorities are in line with customer needs
      • Ensure that your team’s work is properly represented to management
      • Insulate your team from non-technical upper management
      • Relay technical issues to nontechnical stakeholders
      • Present nontechnical concerns to the development team
      Tech Lead's Responsibilities include:
      • Set direction for team members
      • Orchestrate your project’s feature list
      • Prioritize your project’s features
      • Insulate your team from external distractions
      Priorities Defined:
      • P1 - Required. These are the features that you absolutely cannot ship without.
      • P2 - Very Important. You could ship the product without completing these items, but you probably won’t.
      • P3 - Nice to Have. Given time, you will complete them, but these items never delay a ship date.
      • P4 - Polish. These items add a finished feel to your product.
      • P5 - Fluff. If you have time to add “fluff” features, then you are ahead of schedule and under budget.
      Tip 15: Let a tech lead

      If you aspire to be a Tech Lead, you need to prove you’re ready to handle the additional responsibility. Look over the job requirements, and strive to live up to them. Voluntarily perform as many of the tech lead duties as you can. Don’t wait for the job to fall into your lap; demonstrate that you are trying to earn the position and can handle it well.  Use The List  for your personal work but also keep one for your team. Monitor work in progress while keeping an eye on upcoming projects.  Evaluate your team’s process. Locating the weak spots and finding practices or concepts to address problems will give you a new perspective. Don’t give up if you aren’t promoted to tech lead right away. Continue learning and growing for the next assignment. Not everyone has the temperament for a tech lead role, but working toward it gives you a broader picture of the entire project, which makes you a more productive team member. You become a better developer by thinking about and considering how you’d be a tech lead.

      If you’ve just become a tech lead, create a rough road map. Chart where the team currently stands and the direction you want them to go. What problems will you address? What work will you encourage? Make a list of all known problems. Then survey the team to see if they know about additional problems. When you think you’ve arrived at a real list, decide which items you can address and which you can’t. Daily meetings are a great way to keep track of your team’s work without smothering them.

      You're Doing It Right If...
      • Do you know what every member of your team is working on?
      • Can you generate a project status summary in less than five minutes?
      • What are the next five to ten features for your product?
      • Can you readily list the highest-priority defects for your product?
      • What was the most recent problem you cleared up for a team member?
      • Would a team member come to you if they needed an important issue resolved?

      Tuesday, June 1, 2010

      Ship It! - 3.1 Work from The List

      The List is how you set your daily and weekly agendas. You order your work with The List, as does the entire team. When you get swamped, overwhelmed, or scattered, you come back to The List and use it to regain your focus. If you get stuck on a tough problem and you need to step away for a while, The List gives you a readily available set of items to use as filler. This ensures that you’re working on the most important item, rather than the proverbial “squeaky wheel.”  Since all developers are at different skill levels, the tech lead makes exceptions as needed, but generally, no second-priority item can be touched until the first-priority items are complete. The List (as a team tool) gives management and customers something concrete to look at and evaluate the product before the time is invested adding the features.

      Tip 14: Work to The List

      Getting started with your own copy of The List is easy. First, create a list of every task you are working on (or have pending). Then, with your tech lead, assign a priority to each item. Finally, put a time estimate with each item. Don’t worry about getting the time estimates perfect the first time, you’ll improve over time.

      For the team:
      • Put every feature that you are adding to your project on a white board
      • Assign priorities to each feature. Be sure to include the proper stakeholders (management, customers, etc.) in this process.
      • Rewrite all of the features, sorted by priority
      • Attach time estimates to each item
      Until the current top priority items are completed, no one can work on the lower-priority items. This ensures that all the priority-one items are in progress before any of the priority-two tasks are touched.
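That rule (no lower-priority work until the higher priorities are done) is easy to sketch in code. The Task shape here is my own assumption:

```java
// "Work to The List": the next task is always the lowest-numbered
// priority (P1 before P2, etc.) that isn't finished yet.
class TaskList {
    record Task(String name, int priority, boolean done) {}

    static Task next(java.util.List<Task> list) {
        return list.stream()
                .filter(t -> !t.done())
                .min(java.util.Comparator.comparingInt(Task::priority))
                .orElse(null); // null means The List is empty: you're done
    }
}
```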

      The List must be:
      • Publicly available
      • Prioritized
      • On a time line
      • Living
      • Measurable
      • Targeted
      How to Get Started:

      • For an entire day, write down every task as you work on it (this will be your “finished” list).
      • Organize whatever daily task list you do have into a formal copy of The List.
      • Ask your tech lead to help you prioritize your work and add rough time estimates.
      • Start working on the highest-priority item on The List—no cheating! If some crisis forces a lower-priority item higher, record it.
      • Add all new work to The List.
      • Move items to your finished list as you complete tasks (this makes surviving status reports and “witch-hunts” much easier).
      • Review The List every morning. Update it whenever new work pops up... especially the last-minute crisis tasks; you're likely to forget about those when someone asks what on earth you did all last week.
      You're Doing It Right If...
      • Is every one of your current tasks on The List?
      • Does The List accurately portray your current task list?
      • Did the tech lead or customer help you to prioritize The List?
      • Is The List publicly available (electronically or otherwise)?
      • Do you use The List to decide what to work on next?
      • Can you update (and publish) The List quickly?
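The mechanics above are simple enough to sketch in code. This is my own illustration, not anything from the book; the `Task` and `TheList` names are invented. The key rules it encodes are: every task has a priority and a time estimate, you always pull the highest-priority open item, and completed work moves to a finished list for status reports:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int        # 1 = highest priority
    estimate_hours: float
    done: bool = False

class TheList:
    """A minimal model of The List: prioritized, estimated, measurable."""

    def __init__(self):
        self.tasks = []
        self.finished = []   # completed items, kept for status reports

    def add(self, name, priority, estimate_hours):
        self.tasks.append(Task(name, priority, estimate_hours))
        # Keep the list sorted by priority (stable sort preserves entry order)
        self.tasks.sort(key=lambda t: t.priority)

    def next_task(self):
        """Always work the highest-priority open item -- no cheating."""
        open_tasks = [t for t in self.tasks if not t.done]
        return open_tasks[0] if open_tasks else None

    def complete(self, name):
        for t in self.tasks:
            if t.name == name and not t.done:
                t.done = True
                self.finished.append(t)
                return
```

Usage is exactly the daily loop the book describes: add new work as it appears, ask the list what to do next, and mark items finished as you go.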
      I'm a fan of lists but have never used them in this way.  I can see the benefit of getting my manager to help me prioritize things, and it should help keep an entire team on track.  My question is this: what is a good tool for making The List?  Is a wiki with RSS publishing good?  A whiteboard?  Project planning software?

      Monday, May 31, 2010

      Ship It! - 2.8 Choosing Tools

      Be sure your tools use an open format like XML or plain text.  It makes integration and reporting easier.

      Tip 11: Use the best tool for the job

      Tip 12: Use open formats to integrate tools
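To illustrate why open formats pay off, here is a sketch of my own (not from the book) that keeps The List as tab-separated plain text. The column layout is invented, but the point stands: once the data is in an open format, integration and reporting need nothing more than the standard library:

```python
import csv
import io

# A hypothetical plain-text copy of The List: one task per line, tab-separated.
raw = ("priority\testimate\ttask\n"
       "1\t4\tFix login crash\n"
       "2\t8\tRefactor logging\n"
       "1\t2\tWrite release notes\n")

def load_list(text):
    """Parse the plain-text List and return tasks sorted by priority."""
    rows = list(csv.DictReader(io.StringIO(text), delimiter="\t"))
    for r in rows:
        r["priority"] = int(r["priority"])
        r["estimate"] = float(r["estimate"])
    return sorted(rows, key=lambda r: r["priority"])

tasks = load_list(raw)
# Reporting becomes a one-liner, e.g. total hours left on priority-one work:
p1_hours = sum(t["estimate"] for t in tasks if t["priority"] == 1)
```

The same file could be read by a build script, a wiki macro, or a spreadsheet, which is exactly the integration argument Tip 12 is making.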

      Never have a vital part of your product cycle (such as the build system) written in a niche or non-core technology, especially if only one developer knows it. Use a technology that anyone in the shop can configure and maintain.

      Never let a critical technology (like your build system) be created as a technology experiment. Use a tool designed for builds to create your builds, not the cool new technology that a team member wants to learn. There are plenty of non-critical areas for technology learning to take place. Never create automated tools that run on only one machine. Never hard-code dependencies, such as network drives. Put everything you need in your SCM system, and the network drives become unimportant.

      Tip 13: Keep critical path technologies familiar

      I must admit I'm a tinkerer and need to be careful about introducing shiny new things at the wrong time.  I've also been on the receiving end, having to use a sub-system written in a technology I wasn't comfortable with.  Only the author knew that technology, and we ended up replacing it with something more people understood.