Last week we held our first Engineering Leaders Forum of 2024. The forum is an intimate, invite-only event where engineering leaders discuss with peers what keeps them up at night, exchange ideas, and learn from one another.
We held roundtables on two topics near and dear to the hearts of these leaders: measuring engineering organizations, and the strategy and implementation of artificial intelligence. The key highlights are summarized below.
What metrics do you look at?
We wondered how many organizations use DORA or another framework, and which metrics they track to measure and improve their organizations.
- Any single metric used as a target will create the wrong incentives and give an imperfect picture. Instead, choose several metrics that keep incentives balanced. An example we use at DevZero is time to get a workspace vs. workspace reliability/success: focusing only on decreasing time to get a workspace could incentivize hacks that leave workspaces less complete and usable. By tracking both metrics, you mitigate the risk of overfitting to one number and leaving the end UX in a worse place.
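To sketch the paired-metrics idea, here is a minimal example. The names and thresholds are hypothetical illustrations, not DevZero's actual instrumentation: the point is that a health check requires both metrics to hold, so gaming one at the expense of the other still fails.

```python
from dataclasses import dataclass


@dataclass
class WorkspaceMetrics:
    """Paired metrics so neither can be gamed in isolation (hypothetical example)."""
    median_startup_seconds: float  # time to get a workspace
    success_rate: float            # fraction of workspaces that come up complete and usable


def is_healthy(m: WorkspaceMetrics,
               max_startup: float = 90.0,
               min_success: float = 0.99) -> bool:
    # Both targets must hold: a faster startup that drops reliability fails the check.
    return m.median_startup_seconds <= max_startup and m.success_rate >= min_success


# A "hack" that halves startup time but breaks workspaces still fails the check,
# while a slightly slower but reliable setup passes.
fast_but_broken = WorkspaceMetrics(median_startup_seconds=30.0, success_rate=0.90)
balanced = WorkspaceMetrics(median_startup_seconds=75.0, success_rate=0.995)
```

Here `is_healthy(balanced)` is true while `is_healthy(fast_but_broken)` is false, even though the second configuration "wins" on speed.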
- We had a long discussion of stability vs. throughput as framed in Accelerate, a book based on surveys of over 2,000 companies on how they measure productivity. A key conclusion was that these two measures are actually interconnected and influence each other.
- Finally, we spent time discussing testing, unit vs. integration, and how 100% test coverage added after the code is written may simply codify the existing bugs and behavior it is meant to prevent. The group concluded that TDD (test-driven development) can be a good way to ensure tests are less influenced by bad code. TDD can also be a great way for teams to work together: the tests act as a contract on how disparate systems will interact before the teams start working on their solutions independently.
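To illustrate the "tests as a contract" idea, here is a minimal sketch. The interface and prices are hypothetical: two teams agree on the test first, and the consuming team can code against the contract while the implementing team builds to it.

```python
import unittest


def price_in_cents(plan: str) -> int:
    """Implementation written *after* the contract test below was agreed upon."""
    prices = {"free": 0, "pro": 1900}  # hypothetical plans and prices
    if plan not in prices:
        raise KeyError(plan)
    return prices[plan]


class BillingContractTest(unittest.TestCase):
    """Written first; both teams build against this agreed behavior."""

    def test_known_plans(self):
        self.assertEqual(price_in_cents("free"), 0)
        self.assertEqual(price_in_cents("pro"), 1900)

    def test_unknown_plan_raises(self):
        with self.assertRaises(KeyError):
            price_in_cents("enterprise")
```

Running the suite (e.g. `python -m unittest`) before any implementation exists fails loudly, which is the point: the contract is pinned down before either team invests in a solution.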
What is your organization doing around artificial intelligence?
In last year's ELF, a lot of engineering leaders expressed their hesitations around using AI. We wondered what has changed.
- About half of the companies have developers on their team who use AI code assistants (in all cases, GitHub Copilot). Takeaways on productivity:
  - For junior developers, it's a game changer. They love it.
  - For senior developers, they hate it and don't use it.
- Many of the leaders think of Copilot as another tool they can give developers to improve their experience and satisfaction, and less as a productivity tool. They don't see a direct productivity gain and wouldn't buy it if it cost more than a few dollars a month; the productivity gain comes mostly from the improvement in satisfaction.
- The main challenge with Copilot today: it lacks context, so it works well only on basic things like APIs.
- IP: While it was a big concern a year ago, it didn't seem to be one this time. Companies need to protect production data, while recognizing that data is needed to make AI more contextual; codebase access, however, was not a big concern.
- Main concern around AI: that it will reduce developers' accountability, impacting code quality.
I'll leave you with one final thought we took away from this forum: in 5-10 years, will engineering organizations shrink, or will we still see organizations of hundreds and thousands of engineers?
What do you think?