Session type:


Presented by:

John Wilson

Session time:

16 May, 16:35–17:20

Session duration:

45 minutes

About the session

A 55% “faster coding” claim accompanies AI tools such as Copilot and ChatGPT. However, these tools, built on probabilistic models, 'guess' their responses. The feedback loop that tells us whether code actually does what we want it to do is rapidly lengthening, out to “from prompt to production”.

This talk will demo a program that leverages OpenAI GPT-4's function calling capability, allowing it to compile, run, and test its own generated code. By coaching GPT-4 in TDD, the demo re-shortens feedback loops to encompass compilation, test failures, and refactoring cycles.

This approach ensures that AI-generated code fulfils intended behaviours, restoring trust in the development process. It represents a shift from simply optimising developer typing to leveraging AI for effective and accurate code development.
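As a rough illustration of the loop described above, the sketch below wires a "run tests" capability into the function-calling pattern. The tool name (`run_tests`), the candidate code, and the schema layout are illustrative assumptions; the schema follows the shape of OpenAI's tools format, but the model call itself is stubbed out so the compile-run-test feedback step is visible without an API key.

```python
# Hedged sketch of one turn of a TDD feedback loop for model-generated code.
# The "run_tests" tool name and candidate snippet are hypothetical examples.
import json
import subprocess
import sys
import tempfile
import textwrap

# Tool schema the model would receive (shaped like OpenAI's "tools" format).
RUN_TESTS_TOOL = {
    "type": "function",
    "function": {
        "name": "run_tests",
        "description": "Run the candidate code, including its asserts.",
        "parameters": {
            "type": "object",
            "properties": {"source": {"type": "string"}},
            "required": ["source"],
        },
    },
}

def run_tests(source: str) -> dict:
    """Execute candidate code in a subprocess; return structured feedback."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    proc = subprocess.run(
        [sys.executable, path], capture_output=True, text=True
    )
    return {"passed": proc.returncode == 0, "stderr": proc.stderr[-500:]}

# Hypothetical candidate the model might emit: code plus its own tests.
candidate = textwrap.dedent("""
    def add(a, b):
        return a + b
    assert add(2, 2) == 4
""")

feedback = run_tests(candidate)
# In the real loop, this JSON would be sent back to the model as the
# tool-call result, prompting a fix if any test failed.
print(json.dumps(feedback["passed"]))
```

The key design point is that the model never self-certifies: pass/fail comes from actually executing the code, and that result drives the next prompt.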

Participant takeaways:

  • Test-Driven Development (TDD) practice in general.
  • Current capabilities/limitations of IDE coding assistants such as GitHub Copilot & Sourcegraph Cody.
  • Current capabilities/limitations of ChatGPT to write code and tests.
  • An insight into how LLMs work in terms of tokenisation, and how this relates to cost and GPU power.
  • Function Calling as a specific capability of the OpenAI GPT API.
  • How the SDLC will be transformed by AI in the future.
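To make the tokenisation-to-cost link concrete, here is a back-of-envelope sketch. The ~4 characters per token heuristic and the illustrative price per 1K tokens are assumptions, not official figures; accurate counts require a real tokenizer (e.g. tiktoken).

```python
# Hedged sketch: rough token and cost estimate for a prompt.
# chars_per_token=4.0 is a common rule of thumb, not an exact figure;
# price_per_1k_tokens is a placeholder, not a published rate.
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    return max(1, round(len(text) / chars_per_token))

def estimate_cost(prompt: str, price_per_1k_tokens: float = 0.03) -> float:
    return estimate_tokens(prompt) / 1000 * price_per_1k_tokens

prompt = "Write a unit test for a FizzBuzz function."
print(estimate_tokens(prompt))
```

Because billing is per token on both input and output, longer prompts and chattier completions cost more and consume more GPU time, which is why tight feedback loops matter economically as well.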



About the speaker(s)