How Do You Measure Development Team Performance? Lessons from Glinteco

By hientd, at: Feb. 2, 2025, 10:08 p.m.

Estimated Reading Time: __READING_TIME__ minutes


In 2023, Glinteco's development team was drowning in metrics that didn't matter. We tracked velocity points, lines of code, and ticket completion rates religiously, yet still delivered projects that clients weren't satisfied with. The wake-up call came during the Endeverus build: our velocity was an "excellent" 45 points per sprint, but we were three weeks behind schedule because we measured speed over value. A single critical payment integration bug took two developers three days to fix, yet our metrics counted it as "1 ticket closed", the same weight as updating documentation.

 

This article shares the four metric failures we experienced across real Glinteco projects, including Endeverus (a marketplace connecting 10K users), STLPro (processing 100K orders/day), and an unnamed Australian real estate client. You'll see which metrics we tracked initially and why they failed, the specific changes we made with measurable before/after results, and the custom "weight point" formula we developed to measure actual value instead of output volume. It's written for CTOs and engineering managers struggling with similar metric problems. Measuring development team performance is a complex but vital task. At Glinteco, we've faced numerous challenges while trying to balance metrics that drive results with those that truly reflect the value our teams deliver.

 

You'll understand why velocity and ticket counts fail to measure real value, through specific Glinteco project examples. You'll also see our custom weight point formula that factors in priority, complexity, and client impact (with an actual calculation breakdown), learn how we reduced bugs by 47% on Endeverus after shifting our metrics from quantity to quality, and discover the retrospective framework that improved team collaboration scores from 6.2/10 to 8.7/10 over six months.

 

Here’s what we’ve learned through experience.

 

About the Author

 

Written by Nguyen Cong Khoa

 

Nguyen Cong Khoa is DevOps Lead at Glinteco with 5+ years managing development teams across Australia, Vietnam, and Japan. He's personally experimented with (and abandoned) 7 different performance metric systems before finding the approach that actually works. This article compiles lessons from managing 15+ Glinteco client projects including Endeverus, STLPro, and DSTax where metric changes directly impacted delivery quality and team satisfaction.

 

More team management posts | Connect on LinkedIn

 

Challenge 1: Numbers Don’t Tell the Full Story

 

In Q2 2023, during the Endeverus build (a marketplace platform connecting students with businesses), our velocity metrics showed excellent progress: 45 story points per sprint and a 92% sprint completion rate. Management was happy. But three weeks before launch, we discovered critical payment integration issues that threatened the entire October timeline. The bug required two senior developers three days to fix, debugging Stripe webhook failures and race conditions in transaction processing.

According to our metrics, this counted as "1 ticket closed", the same weight as a junior developer updating README documentation. Our velocity stayed high because we completed many small tasks, but we were systematically undervaluing complex, high-impact work. The Endeverus founders were frustrated: "Your reports say you're on track, but critical features aren't working." According to research on software development metrics, 64% of development teams struggle with this exact problem: tracking activity instead of outcomes.

Early on, we relied heavily on quantitative metrics like the number of completed tickets or lines of code written. While these were easy to track, they didn’t capture the complexity or impact of the work. A single bug fix in a critical system could be far more valuable than completing five low-priority tasks, but the metrics didn’t reflect this.

 

What We Did

 

We shifted our focus to value-based metrics. Instead of measuring quantity, we began assessing the impact of deliverables. For example:

  • Key Deliverable Metrics: Measuring successful feature rollouts or bug-free deployments.
     
  • Client Feedback: Regular check-ins with clients to evaluate satisfaction with delivered results.
     
  • Weight Point: We designed a formula to score each issue by priority, difficulty, and performance impact.
     

After implementing value-based metrics in Q3 2023, we measured impact over the next six months across three client projects (Endeverus, STLPro, DSTax). Bug rates in production dropped 47% compared to previous projects using velocity-based metrics, from an average of 8.3 bugs per sprint to 4.4. Client satisfaction scores on delivery quality increased from 7.2/10 to 8.9/10 based on post-project surveys. Team morale improved measurably: our internal quarterly survey showed "feeling valued for contributions" jumped from 6.1/10 to 8.4/10.
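To make the weight point idea concrete, here is a minimal sketch of what such a calculation can look like. The multiplier tables below are illustrative assumptions for this article, not Glinteco's actual internal values:

```python
# Illustrative weight-point score for an issue: priority and
# difficulty multiply together, then client impact scales the result.
# All multiplier values here are assumed for demonstration.
PRIORITY = {"low": 1, "medium": 2, "high": 3, "critical": 5}
DIFFICULTY = {"trivial": 1, "moderate": 2, "complex": 4}
IMPACT = {"internal": 1.0, "client_facing": 1.5, "revenue_critical": 2.0}

def weight_point(priority: str, difficulty: str, impact: str) -> float:
    """Score an issue so high-value work outweighs busywork."""
    return PRIORITY[priority] * DIFFICULTY[difficulty] * IMPACT[impact]

# A critical payment bug vs. a low-priority docs update:
print(weight_point("critical", "complex", "revenue_critical"))  # 40.0
print(weight_point("low", "trivial", "internal"))               # 1.0
```

Under a scheme like this, the payment bug from the Endeverus story counts 40x the weight of a documentation tweak, instead of being counted as equal.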

 

GitHub issue tracking

 

Challenge 2: Collaboration Is Hard to Quantify

 

A cohesive team delivers better results, but how do you measure teamwork? We initially struggled to assess collaboration effectively, as metrics like velocity often highlighted individual contributions rather than group dynamics.

 

Another simple approach is to test how well team members understand each other through games.

 

What We Did

 

We introduced retrospective meetings and peer reviews as part of our process. These sessions gave team members a chance to:

 

  • Share feedback about working together.
     
  • Highlight areas where collaboration could improve.
     

One standout moment was when a developer shared how brainstorming sessions with peers had led to an innovative approach that saved time on a complex task. These anecdotes showed us that fostering collaboration was as important as tracking tasks.

 

Challenge 3: Aligning with Client Goals

 

In September 2023 while building a real estate listing API for a Melbourne client processing 50,000+ properties, we optimized for deployment speed to meet their aggressive timeline. We pushed code daily, maintained 98% sprint velocity, and hit every deadline. Three months post-launch, the client came back frustrated: "The system works but we can't modify it without breaking things. Our maintenance costs are 3x what we budgeted."

 

The problem was that we measured delivery speed without measuring code maintainability. We hadn't tracked technical debt accumulation, code complexity scores, or test coverage, all critical for systems the client needed to maintain long-term. As with patterns we've seen in other infrastructure projects, optimizing for speed without maintainability creates expensive problems later.

 

We had optimized development for speed, only to realize later that the client valued maintainability over quick delivery. This misalignment taught us that performance metrics should always align with client expectations.

 

What We Did

 

We started every project by understanding what mattered most to our clients. Some wanted fast turnarounds, while others prioritized long-term scalability or user experience. Based on these discussions, we tailored our metrics, tracking things like:

  • Delivery Timeliness for fast-paced projects.
     
  • Scalability Metrics for those emphasizing maintainability.
     

One client noted how this approach gave them confidence that our team was working with their priorities in mind, strengthening our partnership.

 

Challenge 4: Balancing Innovation with Delivery

 

Innovation often takes time, which can lower traditional performance metrics like velocity or sprint completion rates. For example, a significant refactor in one of our legacy systems delayed delivery but ultimately reduced long-term maintenance efforts.

 

What We Did

 

We allocated "innovation sprints" to focus on technical debt and architectural improvements. Metrics for these sprints included:

  • Technical Debt Reduction: Tracking issues resolved and system improvements made.
     
  • Code Maintainability Scores: Using tools to assess the quality of the refactored code.
     

The outcome was clear: fewer bugs, smoother deployments, and a happier team proud of their technical contributions.
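To illustrate what a maintainability score measures, here is a crude branch-counting complexity proxy built on Python's standard ast module. This is a rough sketch for intuition only; tools like SonarQube, which we actually used, apply much richer models:

```python
import ast

# Crude proxy for cyclomatic complexity: 1 + the number of branching
# constructs (if/for/while/except/boolean operators) in each function.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def complexity(source: str) -> dict:
    """Map each top-level function name to a rough complexity score."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores

sample = """
def flat(x):
    return x + 1

def branchy(items):
    total = 0
    for i in items:
        if i > 0 and i % 2 == 0:
            total += i
    return total
"""
print(complexity(sample))  # {'flat': 1, 'branchy': 4}
```

Tracking a score like this over time shows whether a refactor is actually flattening the hot spots, which is the signal we wanted from our innovation sprints.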

Specific metrics from our Q4 2023 innovation sprint on the Endeverus codebase: Technical debt tickets reduced from 47 to 12 over one month, deployment time decreased from 45 minutes to 8 minutes after refactoring CI/CD pipeline, and production bugs in subsequent sprints dropped 38% (from 6.5 bugs/sprint to 4.0 bugs/sprint). Code maintainability score measured by SonarQube improved from C-grade to A-grade. Team satisfaction with codebase quality went from 5.8/10 to 8.1/10 in our internal survey.

The refactor delayed feature delivery by three weeks, but prevented an estimated 120+ hours of future debugging time based on bug rate trajectories. This aligns with research on technical debt management showing that addressing debt early saves 4-6x the time compared to fixing accumulated issues later.
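As a quick sanity check on the percentages quoted above, the figures work out as follows (a throwaway calculation, not part of any tooling):

```python
# Verify the headline improvement figures from the Q4 2023 sprint.
def pct_drop(before: float, after: float) -> int:
    """Percentage reduction, rounded to the nearest whole percent."""
    return round((before - after) / before * 100)

print(pct_drop(6.5, 4.0))  # bugs per sprint: 38 (the ~38% drop)
print(pct_drop(47, 12))    # tech-debt tickets: 74
print(pct_drop(45, 8))     # deployment minutes: 82
```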

 

Teamwork

 

Key Takeaways from Our Experience

 

  • Value Over Volume: Metrics should reflect the impact of work, not just the quantity of output.
     
  • Prioritize Team Dynamics: Collaboration is a key driver of success and deserves attention in performance evaluations.
     
  • Tailor Metrics to Clients: Align metrics with client goals to ensure shared success.
     
  • Context Matters: Numbers without context can mislead. Always pair metrics with explanations.
     
  • Balance Innovation and Delivery: Dedicate time to technical improvements, even if it impacts short-term metrics.

 

Conclusion

 

At Glinteco, our journey in measuring performance metrics has been one of continuous learning. By adapting our approach to focus on value, collaboration, and client goals, we’ve created an environment where metrics guide improvement rather than dictate performance.

 

If you’re managing development teams, our advice is simple: don’t let the numbers define your team; let the stories behind them drive your decisions.

Tag list:
- Improving software development team productivity
- Development team performance metrics
- Aligning metrics with client goals
- Tracking success in software development teams
- Challenges in tracking development team performance
- Collaboration metrics for remote development teams
- Contextual performance metrics for developers
- Lessons from Glinteco on team performance
- Measuring team collaboration in software projects
- Best practices for evaluating developer performance
- Technical debt reduction metrics
- How to measure team performance in software development
- Agile metrics for development teams
- Effective performance metrics for developers
- Innovation vs delivery in development teams

Subscribe

Subscribe to our newsletter and never miss out on the latest news.