
Has your organisation embraced Generative AI code assistants?

By David Sugden, Head of Engineering

19/12/2023

Many organisations have established Developer Experience (DX) teams, tasked with examining what drives developer satisfaction and productivity within their company, and taking a deep dive into a number of dimensions spanning organisational culture, tools, and processes.

And with good reason.

Mature organisations are reaping the benefits from optimising the ‘how’ of software delivery and driving ‘fast flow’, focussed on outcomes over output.

The HR department loves happier teams and the employee retention – and reduced recruitment costs – that follow. The PMO function enjoys driving efficiencies; the CFO identifies with getting more value from existing teams and a quicker time to market; and the CEO is no longer kept up at night wondering why her organisation is slow and inefficient.

Back in 2021, the Good Day Project identified metrics that helped their engineers define ‘flowing days’ and ‘disrupted days’ – an engineer losing their state of flow saw that day’s productivity drop to just 14%, and their quality of work dropped off too. The study identified multiple factors that drive flow state, including the need to balance ‘good’ and ‘bad’ interruptions.

Over a similar timeframe, Generative AI developer and code assistants have become integral to many organisations’ developer toolkits and, in the right conditions, can drive a reduction in time-to-market, an increase in quality and security, and continuously maintain team morale.

It should come as no surprise that, as we come towards the end of 2023, over 90% of developers are reported to be using AI coding tools inside and outside of the workplace – and, naturally, the landscape of tools has been constantly evolving.

With many organisations fearful of losing ground to competitors, and with their engineering teams constantly pushing at the boundaries of what’s possible, CTOs are exploring the benefits of coding assistants.

As a keen advocate for Developer Experience and for raising the bar on engineering excellence, I’ve recently been looking at the factors that determine which tool offers the most effective proposition and value for money.

But more on that later. Firstly, let’s discuss some of the benefits.

Benefits of Generative AI Coding Assistants

As we know, the complexity of development work varies wildly. Step forward tools that can reduce the repetitive work and, in so doing, create more time and mental bandwidth for creative, higher-value, and more complex tasks. Rapid development based on natural language documentation, code block autocompletion, and boilerplate implementations allows developers to focus their time and effort on the ‘last mile’.

A number of recent studies have concluded that developers can gain up to 2x efficiencies on these less complex tasks when using a Generative AI code assistant.

Engineers also report improved documentation, which aids understandability and readability over the longer term. Tools achieve this both retrospectively and proactively: for legacy code, they help explain selected modules, classes, or code blocks and can generate natural language descriptions of their functionality and purpose. And for new code, as part of the code generation cycle, developers write a comment block that describes the intended functionality in natural language, and the coding assistant autogenerates the matching code blocks – thus promoting ongoing code documentation as part of all new feature development.
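To make that workflow concrete, here is a minimal sketch in Python – the function, its name, and the postcode rules are all invented for illustration. The developer supplies the signature and the natural-language docstring; the body is the kind of completion an assistant would propose.

```python
# A hypothetical sketch of comment-driven generation. The developer writes
# the signature and docstring; the body below is the sort of completion an
# assistant would suggest from that natural-language description.

def normalise_postcode(postcode: str) -> str:
    """Normalise a UK postcode: strip whitespace, uppercase, and insert a
    single space before the final three characters (the inward code)."""
    compact = "".join(postcode.split()).upper()
    if len(compact) < 5:
        raise ValueError(f"Invalid postcode: {postcode!r}")
    return f"{compact[:-3]} {compact[-3:]}"
```

The docstring then lives on alongside the generated code, so the documentation is written as a by-product of building the feature rather than as an afterthought.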

Naturally, the use of tools helps to improve the overall quality of the codebase. Just as with linters and extensions that integrate with code quality tools, being able to identify, highlight, and resolve common coding errors improves code quality both in individual contributions and across the codebase more generally. Of course, a reduction in bugs and coding errors ensures that issues are detected before they can become a problem, thus saving debugging time later – the sooner issues are found, the cheaper they are to fix, so engineering teams are constantly striving to shift left and reduce feedback loops.

And where a bug has already leaked, these tools can help identify the root cause and will propose fixes.

It’s not only in the coding of business logic that these tools come to the fore. They are also adept at generating complete and complex test cases, suggesting input parameters and expected output values based on the method signature, code, context, and documented functional intent. This includes edge cases, boundary conditions, null checks, and other conditions that might be difficult to identify manually.
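As an illustrative sketch – the test names and cases are invented, and it assumes the hypothetical normalise_postcode function from the earlier example is importable – these are the kinds of table-driven tests, including boundary and failure cases, an assistant might propose:

```python
import pytest

# Assumes the earlier hypothetical function is importable; adjust the
# module name to wherever it lives in your codebase.
from postcode_utils import normalise_postcode

@pytest.mark.parametrize("raw, expected", [
    ("sw1a1aa", "SW1A 1AA"),     # lowercase input
    (" SW1A 1AA ", "SW1A 1AA"),  # surrounding and internal whitespace
    ("M11AE", "M1 1AE"),         # shortest valid compact form
])
def test_normalise_postcode_valid(raw, expected):
    assert normalise_postcode(raw) == expected

@pytest.mark.parametrize("raw", ["", "AB1", "    "])
def test_normalise_postcode_rejects_short_input(raw):
    # Edge and boundary conditions: empty, under-length, whitespace-only.
    with pytest.raises(ValueError):
        normalise_postcode(raw)
```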

Finally, developers can use a Generative AI coding assistant to identify and suggest fixes for security flaws in code blocks – the tools achieve this by making recommendations based on training examples that did not exhibit the same flaw, and by pushing code iteratively through SAST scanning engines.
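For a simple illustration of the kind of flaw these tools catch – the schema and function names here are invented – consider string-built SQL: an assistant would flag the injectable query and propose the parameterised equivalent.

```python
import sqlite3

# Flawed version an assistant or SAST engine would flag: user input is
# interpolated directly into the SQL text, allowing injection.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

# Suggested fix: a parameterised query, so the driver treats the input
# strictly as data rather than as executable SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchone()
```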

That’s just a small taster of the benefits – little wonder, then, that developers using Generative AI-based tools in their workplace have been found to be twice as likely to report overall happiness, fulfilment, and a state of flow.

What should we be concerned about?

So what might you need to be worried about when adopting Generative AI code tools? Firstly, let’s dispel the myth that they are a replacement for humans.

There are three areas to highlight, and the first of these is security risk; specifically, whether generated code can expose sensitive information or introduce vulnerabilities. You should always review and test the generated code thoroughly. Security vendors provide IDE extensions that will identify security flaws and suggest fixes.

Secondly, consider whether the code that trained the model was allowed to be used for such purposes. There is possible legal exposure around licences, as well as possible matches with public code or code that was in the training set. You should always take the same precautions as you would with any code that you did not independently produce, including precautions that ensure its suitability.

Finally, suggestions may appear to be valid but may not actually be semantically or syntactically correct (so-called ‘hallucinations’). Additionally, code may compile yet not accurately reflect the intentions of the developer. You should carefully review and test the generated code, particularly when dealing with critical or sensitive applications, and ensure that it adheres to your best practices, design patterns, architecture, and styles.
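A contrived sketch of that second failure mode (the function is invented for illustration): the suggestion runs cleanly, yet does not do what its name and docstring promise.

```python
# Compiles and runs, but is semantically wrong: it averages the *last* n
# values rather than the first n that the name and docstring describe.
def mean_of_first_n(values: list[float], n: int) -> float:
    """Return the mean of the first n values."""
    return sum(values[-n:]) / n  # subtle bug: should be values[:n]
```

A passing glance – or even a passing compile – would not catch this; only review against the stated intent would.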

With these in mind, let’s go back to the dimensions of an assessment.

Assessing & Comparing Tools

While not exhaustive, this is a set of core factors for organisations to focus on when determining which coding assistant tool to adopt.

  • Supported IDEs

It is important that the tool integrates with common integrated development environments (IDEs) such as VS Code. The extent to which the integration is seamless will impact workflow efficiency, with less context switching and ultimately a more productive coding experience for the developer.

  • Supported Languages & Frameworks

While there are several dedicated tools for niche languages, organisations are more likely to favour tools that support a wide range of programming languages and frameworks, reflecting the importance of versatility. Tools that support developers with multi-language projects are likely to be more powerful and valuable, especially with the prevalence of diverse development scenarios, multi-stack services, and monorepos.

  • Core Features, Accuracy & Relevance 

Given that these tools are primarily used to suggest code snippets and solve problems, the accuracy and relevance of the suggestions is paramount. The tools should be able to understand the problem and provide relevant and syntactically correct solutions – this includes minimising ‘hallucinations’ and avoiding suggestions in the wrong language.

Similarly, the range of features offered is important – being able to analyse code and generate test cases or document and explain the function of complex code is a key requirement.

  • Security & Privacy 

Code suggestions can potentially expose sensitive information or introduce vulnerabilities – some tools provide security scanning capabilities for suggestions, and organisations should assess how highly they rate these additional features. Equally, and especially in a corporate environment, the tool must handle code and data securely, ensuring that code does not leave your environment and that proprietary data is not leaked back into training the model.

  • Legal & Compliance 

While tools generate new code in a probabilistic way, a suggestion may match code in the training set. Models that are trained on permissive open-source repositories will minimise the risk of legal and license concerns, as will checking suggestions against public and open-source repositories for matches. Finally, the capability to filter out suggestions that resemble open-source code will provide additional reassurance.

  • Cost & Licensing 

The cost of any tool may be a deciding factor for many organisations, especially those with small teams, and for start-ups. While many tools offer free options for individual developers, most are priced on a per-user/seat basis.

In addition to these themes, organisations should also consider response time – that is, the speed at which the tool returns suggestions – documentation and/or community knowledge, how frequently new releases and bug fixes are rolled out, support agreements, and so on.

Want to know more about what digital evolution means for your organisation?