
Friday, July 30, 2010

Spring Security 3 and Digest Authentication

This was a tough one so I thought I'd take a few moments and share what I found.  Context: I'm using Spring 3 as the basis for a set of RESTful services.  The new MVC enhancements make supporting a RESTful API pretty simple.  What I wanted to do was to lock down the API a bit, so I turned to Spring Security 3 to make that happen.  Not wanting to enable HTTPS, at least not yet, I turned to Digest Authentication, which provides a reasonable authentication mechanism without the hassle of digital certificates.  Getting Spring Security enabled and, at least initially, authenticating with Basic Authentication was a breeze.  Once I tried to enable Digest Authentication, things got weird.

 Luckily, I was able to locate two resources that steered me down the correct path.  One was the book Spring Security 3 and the other was an article on DZone titled "Using Spring Security to Enforce Authentication and Authorization on Spring Remoting Services Invoked From a Java SE Client".   The upshot is that you cannot rely on the simple namespace configuration that is almost always covered in the Spring Security 3 literature.  It assumes a JSP type of environment with form login screens and the like.  In my case, I'm using just enough of the web stack to allow for HTTP-based manipulation of the server-side resources -- no GUI.  This means you have to configure things by hand so they suit your needs.   Here is the Spring context file I used to enable Digest Authentication for my RESTful services.
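
A minimal sketch of such a hand-wired context (the class names and filter position are Spring Security 3's; the bean names, realm, URL pattern and credentials below are illustrative, not the originals):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:security="http://www.springframework.org/schema/security"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/security
           http://www.springframework.org/schema/security/spring-security-3.0.xsd">

    <!-- issues the Digest challenge (realm and key are made up here) -->
    <bean id="digestEntryPoint"
          class="org.springframework.security.web.authentication.www.DigestAuthenticationEntryPoint">
        <property name="realmName" value="RESTful Services"/>
        <property name="key" value="some-shared-secret"/>
    </bean>

    <!-- validates the Digest response the client sends back -->
    <bean id="digestFilter"
          class="org.springframework.security.web.authentication.www.DigestAuthenticationFilter">
        <property name="userDetailsService" ref="userService"/>
        <property name="authenticationEntryPoint" ref="digestEntryPoint"/>
    </bean>

    <security:http entry-point-ref="digestEntryPoint">
        <security:intercept-url pattern="/**" access="ROLE_USER"/>
        <security:custom-filter ref="digestFilter" position="DIGEST_AUTH_FILTER"/>
    </security:http>

    <!-- in-memory credentials, fine for a prototype -->
    <security:user-service id="userService">
        <security:user name="admin" password="secret"
                       authorities="ROLE_USER,ROLE_ADMIN"/>
    </security:user-service>

    <!-- enables the @PreAuthorize annotations on the service interfaces -->
    <security:global-method-security pre-post-annotations="enabled"/>
</beans>
```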

Since this was a prototype, I'm relying on the in-memory credential store, but you should be able to wire up something more suitable for a production environment.  What I wanted to do was authenticate at the web layer but authorize at the services layer.  The authentication is handled by the Digest Authentication pieces and the authorization is handled by annotations on the service interfaces.  For example:
public interface EchoPort {
    @PreAuthorize( "hasRole( 'ROLE_ADMIN' )" )
    EchoResponse echoMessage( final EchoRequest request ) throws EchoError;
}

I used Apache HttpClient 4.0.1 for testing and was pleasantly surprised to find that it handled both Basic and Digest Authentication for me.  All I had to do was provide the credentials prior to making the call and HttpClient handled the rest.

theClient.getCredentialsProvider().setCredentials( AuthScope.ANY, new UsernamePasswordCredentials( defaultUserName, defaultPassword ) );
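
Under the hood, the client is answering the server's challenge with the MD5 computation defined in RFC 2617.  HttpClient does this for you, but the math is small enough to sketch in plain Java; the values below are the worked example from the RFC itself, not from my services:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Sketch of the RFC 2617 "qop=auth" digest response computation. */
public class DigestExample {

    static String md5Hex(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // MD5 is always present
        }
    }

    /** response = MD5( HA1 ":" nonce ":" nc ":" cnonce ":" qop ":" HA2 ) */
    public static String response(String user, String realm, String password,
                                  String method, String uri, String nonce,
                                  String nc, String cnonce, String qop) {
        String ha1 = md5Hex(user + ":" + realm + ":" + password);
        String ha2 = md5Hex(method + ":" + uri);
        return md5Hex(ha1 + ":" + nonce + ":" + nc + ":" + cnonce + ":" + qop + ":" + ha2);
    }

    public static void main(String[] args) {
        // Worked example from RFC 2617, section 3.5.
        System.out.println(response("Mufasa", "testrealm@host.com", "Circle Of Life",
                "GET", "/dir/index.html", "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                "00000001", "0a4f113b", "auth"));
        // -> 6629fae49393a05397450978507c4ef1
    }
}
```

The point of the scheme is that only this digest, never the password itself, crosses the wire.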

I hope this saves you a bit of time.

Thursday, July 29, 2010

Atlassian Stack: Confluence (integration)

Today's goal is to integrate Confluence with Jira and Crowd.  In the Jira admin doc I found a section on integrating Jira and Confluence.  I'll run through that.

  • configured quick search to point back to the Jira URL
  • configured track back, had to also go into Jira
  • use Confluence gadgets in Jira (I chose to use trusted applications instead of OAuth). I had to configure trusted applications on both the Jira and Confluence sides.  After reading the instructions, it became clear that adding gadgets would be a pain -- you have to add them individually.  I've decided to add gadgets only when/if I decide they might be useful.  Enabled the atlassian-gadgets-shared plugin in Confluence
  • enable Jira linker plugin by entering the download URL of the plugin into the Confluence plugin manager screen
  • after a bit of searching I found the Confluence section that describes how to integrate Confluence with Crowd. I did not build a new directory; I just reused the one I made for Jira.  I made the required Confluence groups and made one of the users a member of both groups.  Copied the jar files and edited the configuration files as instructed, then restarted Confluence.  Logged in with a Crowd user and enabled External user management so that users have to be managed via Crowd.
  • add the "useful extensions" described in the instructions.  Of the 3 suggested, I could only install the Jira Confluence portlet because the other two wanted me to authenticate for access to accounts I don't have.  Oh, well.
Not as clean an integration as I would've liked. Much of it appears to be plugin driven, and you have to install the plugins yourself.

Wednesday, July 28, 2010

Atlassian Stack: Confluence (auto-start)

I had to search on "confluence auto start" to find the page that describes how to auto-start Confluence under Linux.  Like the other products, the note from Daniel Harvey describing how to configure things for Karmic contains the steps you want to follow.  Make sure to make adjustments based on the link so that Crowd starts before Jira or Confluence.  For me, that meant a trip back into the crowd.conf file.  I rebooted the machine and verified that Confluence had started for me.  Easy as pie.

Tuesday, July 27, 2010

Atlassian Stack: Confluence

GreenHopper says that I have to install Confluence so that is what I'll be doing today.  Dragging the Confluence card over from To Do to In Progress gets me started.  I'll be following the Confluence installation guide using the standalone directions.

  1. download and extract the standalone version into /opt
  2. make a soft link to the new directory so I can get to it via /opt/confluence
  3. the docs say that for Debian-based Linux you might need to grab some X11 libraries.  Just to be safe, I'll follow their advice and install them: sudo apt-get install libice-dev libsm-dev libx11-dev libxext-dev libxp-dev libxt-dev libxtst-dev. Installation failed because I already have the libs installed (I'm running the desktop version of Ubuntu so the X11 stuff is already available).
  4. created a confluence user and group
  5. created a directory to hold the Confluence data: mkdir /opt/confluence-home
  6. changed the ownership of that directory and the Confluence installation directory to be owned by the confluence account
  7. edited the to point to /opt/confluence-home
  8. Jira is running on ports 8005 and 8080 which is Confluence's default port, so I have to edit the server.xml file to use a new port.  I'll go with 8000 and 8090.  
  9. next, I've got to wire up Confluence to our mysql instance
  10. I chose not to modify the MySQL instance to specify InnoDb for fear of corrupting the existing databases.
  11. using Webmin, I created a new MySQL user named confluenceuser and gave it full privs.
  12. using Webmin, created a new database named confluence making sure to select UTF-8 as the character set and utf8_bin as the collation order.
  13. start up Confluence, connect to http://localhost:8090/ and run through the wizards
  14. I elected to start with an example site instead of an empty one
Everything went off without a hitch.  I did realize, however, that a task needs to be added to have Confluence auto-start at boot time.  The question is, what is the best way to add that story in GreenHopper?  Specifically, how can I make the task dependent?  I decided to go into the "install Confluence" story and make a sub-task.  That new task ended up in my To Do list.  Easy.  I found a workflow issue: I was allowed to resolve the master task despite having two open sub-tasks.  There must be a setting somewhere that prevents that.

    Monday, July 26, 2010

    Atlassian Stack: Subversion and Mercurial Installation

    I've got a few moments so I figured I would take an easy task and install Subversion and Mercurial.  I'm not sure if the Atlassian stack supports Mercurial but I'm going to install it anyway, just in case.  In GreenHopper, I moved the two tasks from the to-do list into my in-progress list by simply dragging and dropping the two cards from one column to the other.  I was in the Task Board view at the time.   Using Synaptic, I installed the subversion, subversion-tools, mercurial and mercurial-server packages.  The mercurial-server installation failed with a post-install trigger failure.  Since I'm not even sure if I can use that server, I decided not to worry about the problem.  I created a subversion user and group who will own the Subversion repository area.  I made my local account a member of the subversion group.  As my local user, I created the Subversion repository: svnadmin create /opt/svn.  I then changed the ownership over to the subversion account and made the group bit sticky: sudo chown -R subversion:subversion /opt/svn followed by sudo chmod -R g+s /opt/svn.  Lastly, I modified a copy of /etc/init/jira.conf so that it would start Subversion in daemon mode at startup.  I tested the server by using Subversion over SSH: svn list svn+ssh://localhost/opt/svn.  I created a mercurial user and group and added my local account to the mercurial group.  I then created a directory that will hold the various mercurial repositories, making sure to set the ownership to mercurial:mercurial and making the group bit sticky.  I'll admit that I didn't do any real testing but I think that when we discover an issue in the future, I'll be able to use the bug tracking features in GreenHopper.  I went into GH, logged the amount of time I worked on each issue and then resolved the task.

    Atlassian Stack: GreenHopper Configuration

    I'm following the GreenHopper 101 document to make sure that the basic configuration is correct.

    1. install the recommended Labels plugin so I can make use of Epics
    2. shutdown Jira: sudo service jira stop
    3. copy the Labels plugin into Jira: sudo -u jira cp jira-labels-plugin-2.4.jar /opt/jira/plugins/installed-plugins/
    4. restart jira: sudo service jira start
    5. follow the directions for configuring Jira for Scrum
    6. create a new project category: Installation
    7. create a new project: Atlassian
    8. set the project to use the Scrum template
    9. add Kanban constraints to the project (I don't see the view that the instructions say I should have). I noticed that the Task view wasn't coming up and the error message said it was because the project didn't have a release date.  I added a version to the project and gave it a release date.  Now I can get into the Task view but I don't see the Compact (Kanban) view they say I should see.  I looked and Googled but never found anything.  Eventually, I gave up. I might take advantage of my support contract and see if they can help.
    10. I had to create a new dashboard based on the default dashboard before I could add the Agile gadget indicated by the directions
    11. I opened up GHS-1194 to see if Atlassian can help with the missing Compact (Kanban) menu selection.
    12. Going back into the server after Atlassian commented on my ticket, I found that the missing menu selection was available.  I'm guessing I had to reboot the server to make everything appear.
    13. I was finally able to configure the constraints as described in the document.  Multiple columns can be constrained so I just followed the example and did one.
    14. I should comment that while I was waiting for Atlassian to get back to me, I created 5 stories for Sprint 1 of the project.  I'm not sure if that, plus the server reboot, caused the Kanban menu selection to appear but I wanted to note it nonetheless.

    Tuesday, July 20, 2010

    Atlassian Stack: Crowd and Jira Integration

    1. Followed the Crowd Administration Guide, specifically the section on adding an application
    2. create a Crowd directory for Jira. I called mine Jira
    3. create three groups in Crowd: jira-users, jira-developers, jira-administrators and add them to the Jira directory
    4. create at least one user who is in all 3 groups
    5. define the Jira application in Crowd following the instructions provided
    6. copy the Crowd client libraries into the Jira application. I stopped the Jira server, removed the existing Crowd client jar, copied the new one from /opt/crowd/client, and set the permissions so that the file was owned by the jira user and group.
    7. replace Jira's cache configuration file, /opt/jira/atlassian-jira/WEB-INF/classes/crowd-ehcache.xml, with the one in /opt/crowd/client/conf and reset the file ownership
    8. edit /opt/jira/atlassian-jira/WEB-INF/classes/osuser.xml and add the Crowd specific section described in the instructions
    9. edit /opt/jira/atlassian-jira/WEB-INF/classes/seraph-config.xml as instructed
    10. start jira
    11. log into Jira using the new Crowd user
    12. enable external user management as instructed
    Done. A lot of steps but Crowd now controls user accounts.  Next up: GreenHopper configuration.

    Atlassian Stack: GreenHopper

    1. grab a copy of GreenHopper, standalone version
    2. follow the installation guide
    3. shutdown jira - sudo service jira stop
    4. copy the GreenHopper jar to /opt/jira/plugins/installed-plugins. I also had to change permissions on the file to be owned by the jira user and group.
    5. start jira - sudo service jira start
    6. hit http://localhost:8080/ and log in as the administrator
    7. follow the directions for entering in the GreenHopper license
    That is it.  Tomorrow I'll configure GreenHopper because I intend to use it to help track the installation of the rest of the stack.

    Atlassian Stack: Jira (autostart)

    I found these directions for configuring Jira to auto-start.  The main recipe is for the old System V mechanism but a reader left a note on how to set things up for Upstart, which is what my Ubuntu installation uses.  I'll use his configuration file which I dropped into /etc/init.  All I did was edit his configuration to match my directory names, rebooted and then verified that I could hit http://localhost:8080/.  Worked like a champ.  Onto the next application.

    Monday, July 19, 2010

    Atlassian Stack: Jira

    Next up is Jira, the task/bug tracking portion of the stack.

    1. Grabbed the standalone version and unpacked into the /opt directory.  
    2. Made a soft link to the install so upgrades will be a bit easier: ln -s atlassian-jira-enterprise-4.1.2-standalone jira
    3. edit /opt/jira/atlassian-jira/WEB-INF/classes/ so that jira.home points /opt/jira
    4. used Webmin to create a user just for the Jira process.  I called mine, wait for it...jira.
    5. changed ownership of the jira installation directory to be owned by the jira user
    6. start up jira as the jira user: sudo -u jira /opt/jira/bin/
    7. I got a PermGen switch warning which pointed me to the troubleshooting guide.  Decided not to worry about it until I actually do run out of PermGen space
    8. I realized that the instructions did not walk me through how to connect Jira to my database so I shutdown the instance and followed those directions.
    9. using Webmin, I created a new db user named: jirauser and gave it all permissions
    10. using Webmin, created a new database named jira making sure to set it to use UTF-8 encodings
    11. copy the MySQL JDBC driver to the /opt/jira/lib directory.  Once I did that, I saw that a JDBC driver already existed in that directory, although a slightly older one, so I removed mine and left things as is.
    12. run the Jira configuration tool: sudo -u jira /opt/jira/bin/ .  I had a problem doing this because the jira account isn't meant for logging in and doesn't have an environment.  To work around this, I reset the permissions on the installation directory to my account and re-ran the tool.  Once I made the proper configuration choices, I reset the files back to being owned by the jira account.
    13. started jira again using the command in step 6
    14. connected to Jira via the web browser: http://localhost:8080/
    15. ran through the configuration wizard. Decided not to accept the default directories and use /opt/jira-home instead.  Figured I wouldn't lose data to upgrades if I used a standalone directory.
    That's it.  Seems to work.  There are many configuration options that I'll need to plow through but I'll have to first learn how to get Jira to autostart at system bootup, like I did for Crowd.

    Sunday, July 18, 2010

    Atlassian Stack: Crowd (auto-start)

    One final tweak I need to make to Crowd is to get it to start automatically at system start up.  I'm using the Setting Crowd to Run Automatically and Use an Unprivileged System User on UNIX directions to make that happen.  Here are the basics of the steps I ran:
    1. using Webmin, create a crowd user and a crowd group, making sure to not allow the crowd user the ability to actually log in
    2. changed ownership of the crowd installation directory and the crowd-home directory so that they were owned by the new crowd user and crowd group: sudo chown -R crowd:crowd /opt/atlassian-crowd-2.0.6/ and sudo chown -R crowd:crowd /opt/crowd-home/ (the provided script in the instructions did not appear to change all the permissions needed).
    3. used the /etc/init/crowd.conf provided by a reader in the notes (current Ubuntu uses a slightly different startup/shutdown mechanism: upstart) instead of the Sys V script provided in the doc
    4. issued sudo service crowd start from the command line to start up crowd
    5. verified crowd had started by hitting http://localhost:8095/crowd/
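
    For reference, a minimal upstart job of the kind the reader contributed might look like the fragment below.  This is an illustrative sketch, not the reader's exact file; the start_crowd.sh/stop_crowd.sh script names are assumed from the standalone Crowd layout:

```text
# /etc/init/crowd.conf
description "Atlassian Crowd"

start on runlevel [2345]
stop on runlevel [016]

# run the bundled scripts as the unprivileged crowd user,
# forcing a shell because the account is not allowed to log in
pre-start exec su -s /bin/sh -c /opt/crowd/start_crowd.sh crowd
post-stop exec su -s /bin/sh -c /opt/crowd/stop_crowd.sh crowd
```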

    Saturday, July 17, 2010

    Java Testing and Design - 2.4 The Web Rubric

    As with many things, clearly stated objectives and criteria are useful in determining the health of your system.  A rubric has certain advantages:

    1. assessment is more objective and consistent
    2. helps the testers focus on clarifying criteria into specific terms
    3. clearly describes to the developer how her work will be evaluated and what is expected
    4. provides benchmarks against which the developers can measure progress
    Here is an example rubric:

    Criteria assessed through system use:

    Basic features are functioning
      • Level 1 Beginning: few features work correctly the first time used
      • Level 2 Developing: many features do not operate; some features required to complete work are missing
      • Level 3 Standard: most features operate; workarounds are available to complete work
      • Level 4 Above Standard: all features work correctly every time they are used

    Speed of operations
      • Level 1 Beginning: many features never complete
      • Level 2 Developing: most features complete before the user loses interest
      • Level 3 Standard: most features complete in 3 seconds or less
      • Level 4 Above Standard: all features complete in 3 seconds or less

    Correct operation
      • Level 1 Beginning: few features complete successfully without an error condition
      • Level 2 Developing: some features end with an error condition
      • Level 3 Standard: most features complete successfully
      • Level 4 Above Standard: all features complete successfully

    Rubrics can take many forms but typically offer the following:

    1. focus on measuring stated objectives (performance, behavior or quality)
    2. use a range to rate performance
    3. contain specific performance characteristics arranged in levels indicating the degree to which a standard has been met.
    A rubric's job is to help remove subjectivity when assessing the health and quality of the application.

    Friday, July 16, 2010

    Java Testing and Design - 2.3 Web-Enabled Application Measurement Tools

    The best measurements of health for a web application include:

    1. mean time between failures, in seconds
    2. amount of time, in seconds, for each user session, sometimes known as a transaction
    3. application availability and peak usage periods
    4. which media elements are most used (HTML, Flash, JavaScript, etc.)
    Developing criteria based on these elements is difficult, but the author promises to offer help in upcoming sections.  One question I had when I read this section: how do you accurately determine what counts as a failure?  If you are trolling through server logs looking for error messages, you are likely to find lots of them.  What do I have to do in my application to make it easier to distinguish a real problem from a benign one?
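
    The first measurement on the list is simple arithmetic once failure timestamps can be pulled out of the logs.  A sketch, with invented timestamps (the hard part -- deciding what counts as a failure -- is exactly the open question above):

```java
import java.util.Arrays;
import java.util.List;

/** Mean time between failures, given failure times in seconds since startup. */
public class Mtbf {

    public static double meanTimeBetweenFailures(List<Long> failureTimes) {
        if (failureTimes.size() < 2) {
            throw new IllegalArgumentException("need at least two failures");
        }
        // total span between first and last failure, divided by the
        // number of intervals between consecutive failures
        long span = failureTimes.get(failureTimes.size() - 1) - failureTimes.get(0);
        return (double) span / (failureTimes.size() - 1);
    }

    public static void main(String[] args) {
        // Failures observed 0 s, 100 s and 250 s into the run:
        // the intervals are 100 s and 150 s, so the MTBF is 125 s.
        System.out.println(meanTimeBetweenFailures(Arrays.asList(0L, 100L, 250L)));
    }
}
```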

    Thursday, July 15, 2010

    Atlassian Stack: Crowd

    I finally got off my lazy butt and shelled out a couple dollars and purchased starter licenses for the entire Atlassian stack.  I've been a user of a few of their components for a few months now and liked what I've seen so far.  I'm of the opinion that most of the power comes when everything is fully integrated.  I'm always writing Java programs, mostly to learn new frameworks, but have come up with an idea for something on a larger scale.  I'm thinking I can use the Atlassian tools to help guide me through the specification and development process.  I'll blog my journey from installation to day-to-day use.  Wish me luck.

    I've used the free, personal-use licenses that Atlassian used to provide and always ran into a problem with their licensing mechanism: it is tied to a machine.  If I wiped my laptop to install the latest version of Ubuntu, I would have to contact Atlassian in order to get a new key.  Hoping to avoid that pain, as well as to provide a "time machine", I'll be installing the stack into a virtual Ubuntu server based on VirtualBox.  Here are the basic steps I've used to get started:

    1. created a 20 GB VirtualBox VM based on Ubuntu 10.04 Server edition
    2. added the Webmin repository and installed Webmin.  The box will be headless and I'll need a tool to help me manage the box remotely.
    3. updated the box so it is patched to current levels
    4. downloaded and installed the current Sun Java 6 JDK taking care to export both JAVA_HOME and JDK_HOME in the /etc/environment script.  In that script, I also appended the path to point to the JDK bin directory.
    5. installed VirtualBox Guest Additions using these instructions
    6. using Webmin, installed mysql-server and mysql-client  (I cheated and looked ahead at the installation directions and knew that a database was needed)
    That gives me a basic Java-enabled machine to work with.  The plan is to take a snapshot prior to the installation of each new server, giving me the chance to roll back in case something really goes wrong.

    Crowd is a single sign-on solution and I figured it made sense for it to be the first service installed.  The documentation says that deploying their applications into a single Apache Tomcat instance is not supported so I'll use the standalone versions instead.

    1. download and unzip Crowd.  I placed mine in /opt
    2. created /opt/crowd-home and edited /opt/crowd/crowd-webapp/WEB-INF/classes/ to point to it
    3. using Webmin, I created a MySQL user named crowduser and gave the account all db rights
    4. using Webmin, I created the Crowd database, named crowd, taking care to specify UTF-8 as the character encoding
    5. changed the isolation level on MySQL: sudo vi /etc/mysql/my.cnf and added transaction-isolation = READ-COMMITTED  into the [mysqld] section
    6. using Webmin, restarted MySQL so the setting would take effect
    7. download the JDBC driver and copy the JAR file to /opt/crowd/apache-tomcat/lib
    8. start crowd via
    9. connected to Crowd via http://localhost:8095/crowd
    10. followed the steps in the setup wizard
    That was it.  I have no idea how to set up the application but a quick glance at the admin screens makes it look as though adding users and roles will be pretty simple.
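
    The isolation-level change from step 5 ends up as a one-line addition to the config file (path per a default Ubuntu install; your section layout may vary):

```ini
# /etc/mysql/my.cnf
[mysqld]
transaction-isolation = READ-COMMITTED
```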

      Java Testing and Design - 2.2 Defining Criteria for Good Web Performance

      The author asserts that three criteria must be checked if you are to determine if your web application is healthy:

      • are the features working?
      • is the performance acceptable?
      • how often does it fail?
      Create a short list of the most important features of your application and create automated tests for them.  Consult real users and find out how long they will wait for a feature before abandoning it and moving on to something else.  Consult your server logs to find out exactly how often the system fails. Once you know that, you decide if the failure rate is too high for your situation.

      Wednesday, July 14, 2010

      Java Testing and Design - 2.1 Just What Are Criteria?

      Web-enabled applications need to be constantly tested, monitored and watched, primarily because the users' expectations of the application will change over time.  At first, they might care that the application works at all.  Later, they might care about how fast it works.  Even later, they might care whether it does exactly what they need.  You can use test agents to test your application daily.  If you build your agents based on archetypes you can gather meaningful data that can help you gauge whether or not your application is meeting your users' needs.

      The more time spent on defining archetypes, the easier it will be for your team to understand exactly what the system needs to do.

      1. Archetypes make an emotional connection between the goals of a prototypical user of the application and the software team.  Team members will have an easy-to-understand example of a person that prototypically will use your application.  Rather than discussing individual features in the application, the team will be able to refer to functions that the archetype will want to regularly use to accomplish their personal goals.
      2. Archetypes bring discussions of feature, form, and function in the application from a vague general topic to a focused user goal-oriented discussion.
      3. Archetypes give the team an understanding of where new functions, bugs, and problems need to be prioritized in the overall schedule.  
      Archetypes make it easier to know what to test and why because the test agents, modeled after the archetypes, will be testing in a way that is similar to your actual users.  Archetypes can also be helpful in identifying when performance becomes a problem.

      Tuesday, July 13, 2010

      Java Testing and Design - 1.5 Testing Methods

      This section describes the following testing methods:

      • click-stream testing
      • unit testing (state, boundary, error, privilege)
      • functional system testing (transactions, end-to-end tests)
      • scalability and performance testing (load tests)
      • quality of service testing
      Click-stream testing is when you keep track of the clicks a user performs as she navigates your site.  The thinking is that more clicks equal more advertising revenue.  This type of testing doesn't tell you if the user was able to achieve her goal when visiting your site and isn't very useful when trying to determine the quality of your software.

      Unit testing is when a developer tests an individual software module by providing it various inputs and comparing its outputs against expected outputs.  Although useful to developers, module-based testing doesn't tell you how useful or healthy the entire web application is.  Don't rely solely on unit tests when assessing your system's health.

      Functional system testing takes a perspective closer to that of the end user in that it checks the entire application from end to end.  A system test will start at the browser, travel through the web servers, the individual software modules and the database, and then come back.  When system testing, you have to understand your system well enough to know exactly what front-end actions trigger interactions between the various back-end components.  Bringing up a simple static HTML page isn't much of a test because it doesn't cause the software modules that access the database and other integration points to fire.

      Scalability and Performance testing tries to understand how the system behaves with concurrent users accessing the system.   Typically, the functional tests can be used to drive the system and obtain loaded system performance metrics.

      Quality of Service testing attempts to show that the application is meeting service level agreements.  Functional tests should be used to monitor the web application's health.  Done over long periods of time, you can get a picture of what is normal and what is not.

      A test agent is a framework or process that allows for automation of the system's test.  Typically, an agent must provide the following:

      • checklist - defines the conditions and states of the test, such as being able to log into the system
      • test process - lists the system transactions that are needed to perform the checklist
      • reporting method - records the results after the process and checklist are completed
      The more automated a testing agent is, the easier it is to certify the health of the application.

      You can use your automated test agent to test for scalability and performance.  Scalability describes the system's ability to serve users under varying levels of load.  Run your tests when the system has 1, 100, 500 and 1000 concurrent users and measure how long the tests take to respond and you have an idea of how scalable your system is.  Performance measures the system's ability to deliver accurate results under load.  If the system responds quickly (scales) but returns incorrect results or errors it isn't performing very well.  Taken together, performance and scalability can tell you how your system behaves under load.

      Creating functional tests can be difficult because it is hard to know exactly how the tests should interact with the system.  One idea to try is archetypes.  Construct several testing "personalities" which describe different types of system users and capture the way they interact with the system.  For example, a field agent might be somebody who does lots of reads and relies heavily on the querying capabilities of the system.  A manager might be more of a power user who uses the reporting portion of the system to generate his monthly status reports for his boss.  The more detailed the archetypes you create, the better your test agents will be and the better tested your system will be.  In practice, automation of the test agents is the only practical way of running the tests in any consistent and meaningful way.
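
      A test agent along these lines can start very small.  The sketch below is a hypothetical harness, not from the book: the named steps stand in for real HTTP calls against the system for one archetype, and the report is a simple pass count:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

/** Minimal test agent: an ordered checklist of named checks plus a report. */
public class TestAgent {

    private final Map<String, BooleanSupplier> checklist = new LinkedHashMap<>();

    /** Adds a named check to the checklist (the "test process"). */
    public TestAgent step(String name, BooleanSupplier check) {
        checklist.put(name, check);
        return this;
    }

    /** Runs every step in order and returns "passed/total" as the report. */
    public String run() {
        int passed = 0;
        for (Map.Entry<String, BooleanSupplier> e : checklist.entrySet()) {
            boolean ok = e.getValue().getAsBoolean();
            if (ok) passed++;
            System.out.println((ok ? "PASS " : "FAIL ") + e.getKey());
        }
        return passed + "/" + checklist.size();
    }

    public static void main(String[] args) {
        // A field-agent archetype: mostly reads and queries.  The lambdas
        // here are stand-ins for real calls against the application.
        String report = new TestAgent()
                .step("can log in", () -> true)
                .step("query returns results", () -> true)
                .run();
        System.out.println(report);
    }
}
```

Run the same agent at 1, 100, 500 and 1000 concurrent instances, as the scalability discussion above suggests, and the per-step timings and pass rates become your load numbers.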

      Monday, July 12, 2010

      Java Testing and Design - 1.4 A Concise History of Software Development

      This chapter shows how computer programming grew from very small and tightly controlled environments to the large free-for-all that is today's web environment.  The interesting part of the discussion revolves around the modern software development processes.  A flow that the author has termed The Internet Software Development Lifecycle looks like this:

      1. Specify the program from a mock-up of a Web site
      2. Write the software
      3. Unit test the application
      4. Fix the problems found in the unit test
      5. Internal employees test the application
      6. Fix the problems found
      7. Publish software to the Internet
      8. Rapidly add minor bug fixes to the live servers
      The web-style of software engineering makes use of many interconnected systems living all over the network. The same decentralization can also be applied to the software components themselves, as many parts of an application are implemented using shared software.  The author calls to our attention the current "experiment" with open source software development and distribution as well as agile programming.

      Next up, Testing Methods...

      Sunday, July 11, 2010

      Java Testing and Design - 1.3 Why Writing High-Quality Software Today Is Hard

      In this section, the author offers 5 prime reasons why creating software is difficult:

      1. The Myth of Version 2.0 Solidity - this is when people believe that scrapping the 1.0 product and starting from scratch will create a superior product.  A rewrite is a major overhaul and in the author's experience a more moderate approach has proven its worth.  There is more stress to get the rewrite to the same state that the 1.0 release is at, and there is no "down time" for developers to recharge their batteries.  A smaller, iterative technique allows for constant improvements without the burnout.
      2. Management’s Quest for Grail - this is when management lays down the law and says all projects and all teams shall use methodology X.  The author contends that there are different programming styles which require different degrees of architectural direction: Users, Super Users, Administrators, Script Writers, Procedural Programmers, Object Programmers, Interoperability Programmers and Orchestration Programmers.   A team must select a methodology that matches the ability and style of the team members.  Forcing a team of Script Writers to adopt SCRUM might not be a wise decision.
      3. Trying for Homogeneity When Heterogeneity Rules - trying to standardize the entire company on a single platform is not practical because you would need to exorcise the "impure" parts of the existing solutions, which never happens in practice.  Existing solutions were built upon technologies that made sense at the time and still provide useful functionality, so deal with it.
      4. The Language of Bugs - instead of using bug numbers as the vocabulary to describe if a system is working correctly, why not use a test agent that is modeled after a single user?  Choose just one user, understand their goals, and learn what steps they expect to use. Then model a test against the single user. The better understood that individual user’s needs are, the more valuable your test agent will be.
      5. The Evil Twin Vice Presidents Problem - this is the scenario where development and test is run by one person and the data center is run by another.  This promotes sniping between the two groups where cooperation would be more productive.  The author recommends providing a common testing framework that is used by development and test to run functional, scalability and performance tests.  The IT guys can leverage that framework to verify the health of the system and provide QoS reports. 
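      The test-agent idea in point 4 could be sketched in Java roughly like this. Everything here is invented for illustration (TestMaker's actual API is different): the agent runs one user's expected steps in order and reports the first failure in that user's vocabulary, instead of as an anonymous bug number.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BooleanSupplier;

/** A test agent modeled after a single user: it runs that user's steps in
 *  order and reports the first failure in the user's own vocabulary. */
class SingleUserAgent {
    private final String userName;
    // LinkedHashMap preserves the order the user expects to perform the steps.
    private final Map<String, BooleanSupplier> steps = new LinkedHashMap<>();

    SingleUserAgent(String userName) { this.userName = userName; }

    /** Adds one step the user expects to perform. */
    SingleUserAgent step(String description, BooleanSupplier action) {
        steps.put(description, action);
        return this;
    }

    /** Returns null when every step succeeds, or a user-centric failure message. */
    String run() {
        for (Map.Entry<String, BooleanSupplier> e : steps.entrySet()) {
            if (!e.getValue().getAsBoolean()) {
                return userName + " could not: " + e.getKey();
            }
        }
        return null;
    }
}
```

      With a model like this, a regression report reads "Maria could not export last month's invoices" rather than "bug #4711 regressed", which is exactly the vocabulary shift the author is after.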

      Saturday, July 10, 2010

      Java Testing and Design - 1.2 Web-Enabled Applications, the New Frontier

      In this section, the author describes how software has been architected in the past.  The interesting point is how he sees the new emerging architecture: one where the desktop contains both presentation and some business logic, but much of the business logic lives elsewhere, out on the web, where it is accessed via RESTful and SOAP APIs.  He believes that the driving force behind the new architecture is a set of "firsts":

      • This is the first time in the information technology (IT) industry that software developers, QA technicians, and IT managers agree on interoperability architecture.
      • This is the first time that developers, QA technicians, and IT managers agree that security needs to be federated.
      • This is the first time that developers, QA technicians, and IT managers agree that scalability in every application is critical to achieving our goals.
      • This is the first time that government regulation is part of our software application requirements.
      • This is the first time that interoperability tools and technology are delivered with integrated development environment (IDE) tools.
      • This is the first time that Java and other interoperable development environments and open source projects have reduced operating system and platform dependencies to a point where, in many cases, it really does not matter any more.
      In summary, now that we have standards, tooling, and infrastructure that allow applications to interoperate, we'll be doing lots more of it.  I use a nifty little program called XBMC to serve up media files in my home.  One of its nicest features is its ability to show me information about the movie, TV show, or song I'm thinking about playing.  It can do this because it can use the web, and its standard protocols, to mine the internet for information.  XBMC wouldn't be nearly as useful if it couldn't leverage the data contained in other applications.

      Friday, July 9, 2010

      Java Testing and Design - 1.1 The First Three Axioms

      I'll be reading Java Testing and Design: From Unit Testing to Automated Web Tests by Frank Cohen and summarizing the chapters.  After skimming it I think it'll be an interesting read.

      The author briefly describes his first Basic program and his progression to more modern programming languages.  The upshot is that he learned the three axioms of development:

      1. even though a program contains only a single line of code, it needs to be maintained
      2. every piece of software inter-operates with other software
      3. there is more to delivering high-quality software than writing code
        It is these axioms that help the author to deliver quality, web-enabled Java software.  Some of that software is TestMaker, an open-source test automation utility for which the author is the architect.  Hopefully, these axioms will allow the author to show us some techniques for improving our ability to write better software.

      Thursday, July 8, 2010

      Prefactoring: Plan Globally, Develop Locally

      Plan Globally, Develop Locally. Incremental implementations should fit into a global plan.

      The main idea here is that as you go through your use cases and iterations, you should take a few moments to examine your objects and see whether their structure makes sense, in broad terms, with the existing use cases as well as the planned ones.  You can't know everything you'll need to know before you start typing code, but you can look at your planned implementation for the sprint to make sure you won't have to do a major refactoring later on.  Try to keep changes as small as possible.  Changing the implementation of a class is less painful than restructuring an entire inheritance hierarchy. 

      Wednesday, July 7, 2010

      Prefactoring: Don't Repeat Yourself (DRY)

      Don't Repeat Yourself (DRY). Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

      This is a really powerful idea to grasp and sometimes very hard to adhere to. Working on systems where you have to remember to change 4 different files just to make a single logical change is painful and error prone.  If you know you want to stay DRY, and who doesn't, then you start to see your solutions in a different light.  Sometimes you have to write a custom tool that churns through an XML file to generate the multiple pieces the system needs to support a new feature, but at least you have a canonical definition of that feature.  One tool that might help you stay DRY is MPS.  You can create a custom language to describe the solution to a problem, such as generating Hibernate and JAXB aware Java classes from a single definition file.  Having to look in multiple places to make sure a change gets through is going to cause a failure in the system at some point.
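      As a tiny Java sketch of "a single, unambiguous, authoritative representation" (the class and field names here are invented for illustration): one field list drives every derived artifact, so adding a field in one place updates every representation at once.

```java
import java.util.List;

/** One authoritative list of fields; every artifact is derived from it. */
class CustomerFields {
    // The single source of truth. Add "phone" here and every
    // derived representation below picks it up automatically.
    static final List<String> FIELDS = List.of("id", "name", "email");

    /** CSV header derived from the authoritative field list. */
    static String csvHeader() {
        return String.join(",", FIELDS);
    }

    /** SQL projection derived from the same authoritative list. */
    static String sqlSelect(String table) {
        return "SELECT " + String.join(", ", FIELDS) + " FROM " + table;
    }
}
```

      The generator-based approach described above is the same idea at a larger scale: the XML file or MPS language plays the role of FIELDS, and the generated Hibernate and JAXB classes play the role of the derived methods.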

      Tuesday, July 6, 2010

      Prefactoring: Document Your Assumptions and Your Decisions

      Document Your Assumptions and Your Decisions.  Keep a journal for retrospectives.

      The human memory isn't very good -- at least mine isn't -- so it makes sense to keep a journal of the design decisions you've made as you've implemented the system.  The code will tell you what decision you made, but often it won't tell you why you made that choice.  As the system evolves, those whys can become important as new decisions need to be made.  Did you assume that the system would be operating on a fast, reliable network?  Did you assume that nobody would ever want to manipulate that particular sub-system, so no RESTful API was put in place?  Having a search-capable repository of your decisions is useful in retrospectives, as well as for answering the age-old question of "why did we do that again?"  I use a wiki to keep my notes, mainly because it is easy to use and easily searched.

      Monday, July 5, 2010

      Prefactoring: Think About the Big Picture

      Think About the Big Picture: Decisions within a system should be congruent with the big picture.  What does that mean? It is all about context, with the context being the system's architecture and business purpose.  The definition of architecture that I've come to love is "important things that are expensive to change".  For some systems, the message format could be considered part of the architecture because it spikes through all layers of the system.  For other systems the use of the Java EE stack would probably be part of the architecture because it would be a lot of work to shift to a POJO model.  If you can somehow get the entire team to share a vision of the architecture and make sure the important decisions are made based on the context of the architecture and its primary purpose, the project has a better chance of succeeding.

      Sunday, July 4, 2010

      Prefactoring: Don't Reinvent the Wheel

      Today, I'd like to begin summarizing some points from the book Prefactoring: Extreme Abstraction, Extreme Separation, Extreme Readability by Ken Pugh.  The book is organized as a series of stories that highlight a particular guideline.  Each post will focus on a single guideline.  Today's is Don't Reinvent the Wheel: look for existing solutions to problems before creating new solutions.

      As a developer, I'm as guilty as the next guy of Not Invented Here syndrome, but I've come to appreciate the fact that if I'm going to get paid, I need to work on solving the problem and not on creating a shiny new tool that I think will help me solve the problem.  Sometimes I can grab an entire product from the open source community and solve the problem, but often I have to integrate many pieces together in order to get things done.  For me, I tend to gravitate to the Spring portfolio of solutions.  I'm comfortable with the programming model, and they tend to stay out of my way and let me focus on the problem at hand.  Most importantly, they free me up from having to write yet another custom configuration file parser or MVC web framework.  The faster I can create a solution, the more I stay interested and the happier my boss is.

      Saturday, July 3, 2010

      Ship It! - Summary

      I really enjoyed reading this book.  It isn't a huge tome, and it makes you think.  I like the fact that they take an a la carte view of process.  I found the discussion around Tracer Bullet Development to be particularly interesting.  I think this book is worth the time and money, so go out and buy yourself a copy.

      Friday, July 2, 2010

      Ship It! - 5.18 We're Never Done

      How do you decide what to do? Use The List to break down the product into individual features. You’ll know your project is finished when all the features on The List are implemented and are working well together (of course, feature integration should be one of the tasks on The List!).

      Feature list guidelines:
      • Break down any item with a time estimate of more than one week into subtasks. It’s okay to have a top-level task that takes weeks or months, but that estimate is just a guess unless you back it up with estimates for its subtasks.
      • Any item that takes less than one day is too small. If an item can be scheduled for a time period shorter than a day, that item is probably too low-level for The List.
      • A single customer example (or use case or scenario) can involve multiple features in The List. Don’t try to force an entire example into a low-level item in The List; break it into subtasks.
      • Add priorities to the items on The List, then stick with them. Don’t work on a priority-two item while there are priority-one items that are unfinished. But feel free to change those priorities as it becomes necessary.
      • Assign specific people to each feature on The List. You can do it on the fly (as one person finishes a task, they “sign up” for another), or you can assign them all up front, and then change them as necessary. It depends on how your team works best.
      • Be flexible. Use change to your advantage. Changing The List means that you’re improving and refining it, that you’re getting customer feedback, and that you’re matching The List with the real needs of your customers.
      Tip 40: The list is a living document.  Change is life.

      Don’t just remove an item from The List after it’s finished. Keep a copy of completed features (along with dates, priority, and assigned developer) to use as your audit trail. When you’re asked, “Why didn’t feature X go in?” or “What features did go in?” The List will give you the answers.
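      The guidelines above could be modeled in Java roughly like this; the Feature and TheList names are invented for illustration, not from the book. Completed items stay on the list as the audit trail, and the next task is always the highest-priority unfinished item.

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** One entry on The List; a non-null completed date marks it done. */
record Feature(String name, int priority, String assignee, LocalDate completed) {
    boolean done() { return completed != null; }
}

class TheList {
    private final List<Feature> items = new ArrayList<>();

    void add(String name, int priority, String assignee) {
        items.add(new Feature(name, priority, assignee, null));
    }

    /** Mark done but keep the entry: The List doubles as the audit trail. */
    void complete(String name, LocalDate when) {
        items.replaceAll(f -> f.name().equals(name)
                ? new Feature(f.name(), f.priority(), f.assignee(), when) : f);
    }

    /** Next task: the highest-priority (lowest number) unfinished item. */
    Feature next() {
        return items.stream().filter(f -> !f.done())
                .min(Comparator.comparingInt(Feature::priority)).orElse(null);
    }
}
```

      Because complete() keeps the entry instead of deleting it, "What features did go in?" is answered by filtering for done items along with their dates and assignees.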

      Tip 41: If it's not on The List, it's not part of the project.

      Tip 42: Always give feedback fast.

      Thursday, July 1, 2010

      Ship It! - 5.17 Features Keep Creeping In

      When someone requests a new feature, look at the tasks your team is currently implementing. Is there time to implement the new feature before the next delivery? Will the new feature work with the already-scheduled features? Will it even make sense in the context of the planned product?

      If the answer to these questions is “yes,” then by all means add the feature to The List. Decide how important it is compared to the other features, and assign it a priority and a developer to implement it.

      However, if the answer to one or more of these questions is “no,” then don’t add the feature, regardless of the pressure you get from the requester. Instead, gently but firmly use The List to explain why you won’t implement the feature.