Monday, April 18, 2016

EA 873 Week 13 Tracking and Measuring Performance


As always, hello! Welcome back, if you are returning! Thanks for reading, if this is your first time! I am a little late getting my post done this week. Life got in the way, but I guess that is what this is all about...balancing the two!


I was going over some readings this week and was particularly intrigued by the Gartner presentation on Developing High-Impact EA Performance Metrics. It can be viewed here:

 http://www.gartner.com/document/1413013

The presentation touched on why we lose our way when it comes to strategy execution. I have seen this in many forms, and if there is one theme running through my posts this semester, it is that execution is crucial but only achievable with consensus and direction. The article goes on to argue that EA efforts should be guided by this strategic direction, becoming, essentially, the bridge between strategy and execution. To act as this bridge, we must measure EA performance, using metrics that give EA and management the ability to track the overall performance of the EA effort.

I could not agree more. Yet in my experience, this breaks down because most organizations have difficulty defining, building, and managing core metrics. I talk about the process a little in my week five post here:

Week 5 Blog Post


I touch on the metric discussion briefly near the end, including the use of Goal, Question, Metric, or GQM.

To restate:

Per Basili et al., The Goal Question Metric Approach, GQM is:

“A technique that is based on the assumption that for an organization to measure in a purposeful way it must first specify the goals for itself and its projects, then it must trace those goals to the data that are intended to define those goals operationally, and finally provide a framework for interpreting the data with respect to the stated goals.”  

http://www.cs.umd.edu/~mvz/handouts/gqm.pdf

In other words, GQM is a hierarchical structure starting with a goal, which specifies the purpose of measurement, the object to be measured, the issue to be measured, and the viewpoint from which the measure is taken.

There are three levels to GQM.

Level 1 Conceptual level (GOAL): A goal is defined for an object, from various points of view, relative to a particular environment. Objects of measurement include:


  • Products: Artifacts, deliverables, and documents that are produced during the system life cycle; e.g. models, components, test suites.
  • Processes: Software related activities commonly associated with time; e.g. modeling, designing, testing.
  • Resources: Items used by the processes to produce their outputs; e.g. personnel, hardware, software, office space.


Level 2 Operational level (QUESTION): A set of questions is used to characterize the way the assessment or achievement of a specific goal is going to be performed, based on some characterizing model (the RBM). These questions try to characterize the object of measurement (product, process, resource) with respect to a selected quality issue and to determine its quality from the selected viewpoint.

Level 3 Quantitative level (METRIC): A set of data is associated with every question in order to answer it in a quantitative way. The data can be:


  • Objective:  If they depend only on the object being measured and not on the viewpoint from which it is taken; e.g., number of versions of a document, staff hours spent on a task, size of a component.
  • Subjective:  If they depend on both the object that is being measured and the viewpoint from which they are taken; e.g., readability of a text, level of user satisfaction.
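
To make the three levels a bit more concrete, here is a minimal sketch of how a GQM hierarchy could be represented in code. This is only an illustration; the class and field names are my own and are not prescribed by the GQM paper.

from dataclasses import dataclass, field
from typing import List, Optional

# A goal is characterized by questions, and each question is answered
# quantitatively by one or more metrics (objective or subjective).

@dataclass
class Metric:
    name: str
    objective: bool                 # True: depends only on the object measured
    value: Optional[float] = None   # filled in once data is collected

@dataclass
class Question:
    text: str
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str      # why we measure, e.g. "improve"
    issue: str        # the quality issue being measured
    object: str       # the product, process, or resource being measured
    viewpoint: str    # whose perspective the measurement is taken from
    questions: List[Question] = field(default_factory=list)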


Here is a depiction that better represents the hierarchical structure:



Now let us look at an example. I have tried to use something that relates to project-level work: change request turnaround time. Working from current state to future state, one of the things we identified in a SWOT analysis was the ability to be reactive. As the business changes, so should the processes and improvements the EA effort is overseeing. Thus, we want to track improvements in turnaround time from the viewpoint of the project manager.




This provides a very measurable approach: a metric that everyone can agree to, measure, and report the change in over time.
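
As a rough sketch of how that could look in practice (reusing the classes above; the specific questions and metric names here are my own, hypothetical choices for the change request turnaround scenario):

# A hypothetical GQM instance for change request turnaround time,
# measured from the project manager's viewpoint.
turnaround = Metric(name="average change request turnaround (days)", objective=True)
trend = Metric(name="month-over-month change in turnaround (%)", objective=True)
pm_rating = Metric(name="project manager satisfaction rating", objective=False)

goal = Goal(
    purpose="improve",
    issue="turnaround time",
    object="change request process",
    viewpoint="project manager",
    questions=[
        Question("What is the current change request turnaround time?",
                 metrics=[turnaround]),
        Question("Is turnaround improving as the process changes?",
                 metrics=[trend, pm_rating]),
    ],
)

# Reporting is then just a walk down the hierarchy.
for q in goal.questions:
    print(q.text)
    for m in q.metrics:
        kind = "objective" if m.objective else "subjective"
        print(f"  - {m.name} ({kind})")

The payoff of keeping the structure explicit is traceability: when a number is reported, you can point back to the question it answers and the goal it serves.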

Let me know what you think, or if I am off base here. I would like to hear how others have built consensus around measuring and tracking EA performance.

Thanks again for reading!

8 comments:

  1. Joe,
    Thanks for sharing and expanding on the GQM technique for measurements. It's truly a great read.

    The question phase is such a simple, effective technique that we forget how we used it even during our own formative intellectual development as young students. Your illustration of a second, follow-up question phase was even more instructive, showing how we can place layers of questions until we arrive at a meaningful metric. Your GQM example on the 2nd question also reminds us of the important aspect of measurements, which to my mind is actually the whole point of the exercise -- that is, to take the measurements to gain insights on how to improve the next EA development cycles. Your details really made a difference and brought home the clarity.

    As you point out, the GQM technique provides a very methodical approach to formulating a clear set of metrics that everyone can agree to, acquire measures for, and report changes/improvements on over time. As I have commented elsewhere, the Gartner notes provide sound and logical guidance, but we really need more detailed discussions to arrive at meaningful measurements.

    Thank you for always contributing helpful details.

    1. Thanks for reading and commenting! You are so right; the question phase is key. So many times I have seen metrics developed and then not used because they don't address the question being asked! When I was first introduced to GQM, the tech side of me thought, "OK, I can see how this relates to how to calculate a core metric." In practice, I have seen what happens when the first performance metrics are reported based on GQM: typically everyone is focused and not confused, because the metrics address the goal and question of the end user.

      Thanks again for your comments.

      JC

  2. Quite reasonable and easy to understand. Defining good metrics and tying them to goals is critical. What I have found is that defining and implementing must both occur for metrics to succeed. Obvious observation, I know, but I have found that teams design or reengineer a process, ensure it is value-added and streamlined, and define metrics, but come implementation, they go back to old habits of not collecting the metrics.

    So my only recommendation is that this be supplemented with a value-to-effort assessment for each metric. How will the collection of each metric increase workload? Sure, you may improve product quality, but then find that the costs of metric collection, assessment, and analysis have eaten away at revenue or end value. Implementing metrics that can be captured as part of the process (there are various ways this can be achieved) goes a long way toward making a successful quality management system.

    1. Lariel,

      I could not agree more with your comment. Implementation has to be considered for the reasons you state. I have seen that kill several initiatives. Adding something to account for that would be a key addition here.
      JC

  3. Joe - great post. Thanks for going into detail on the GQM technique. I find it's just as important to be able to show progress, or even prove value, in a project at any given point in time as it is to actually be making progress on the project itself. For example, in my current organization we've heard of projects that were shut down because they weren't able to prove value or provide any metrics on the actual progress that was made. If you can't show value is being added, you are already behind the proverbial eight ball. Having this technique for measuring value is crucial.

    1. Nick,

      Thanks for commenting. You are absolutely correct about showing value, which begins with being able to measure progress. Lariel makes a good point about understanding the feasibility of implementing a given metric. That can be a show stopper as well.

      JC

  4. Nice explanation. Something caught my eye early in your post: "only achievable with consensus and direction." It strikes me because I've been involved in EA initiatives that were consensus driven, authoritarian, and a blend of both. Consensus is always interesting but slow at best and completely impossible at other times. Good explanation of GQM; perhaps it's a good tool to help drive consensus.

    1. Bill,

      Thanks for commenting. You are certainly correct about consensus. I think consensus is managed best in small sizes. Easier for someone to swallow, so to speak ;-). Really, though, getting consensus can be time consuming and a pitfall in itself. Building consensus along the way is a good approach, and what I have done is show that if we can agree on these smaller artifacts, maybe we can agree on what we build on top of them. You have to know when to stop and either drop it or move on, to your point.
      Thanks again!
      JC
