Daredevil: Programming with Google's AI Assistant Bard – a Practical Report

What are the new programming assistants good for? For this practical report, Google's Bard is let loose on microcontrollers and has to show what it can do.


(Image: welcomia/Shutterstock.com)


(The original version of this article appeared in German.)

Hardly any other technology makes developers as worried about their jobs as Artificial Intelligence (AI), which has been capable of generating images, text and code for some time now. For a small practical test, the author subjected Google's Bard system to an examination in the laboratory.

An AI system intended as an assistant for coding can only be meaningfully assessed if it is actively involved in a value-creating corporate process. The author's consultancy company programs mobile applications and embedded systems, and the programming test with Bard is therefore based on tasks from this area.

A Programmer's Report by Tam Hanna

Tam Hanna is an engineer who has been involved in programming and using handheld computers since the days of the Palm IIIc. He develops programmes for various platforms, including Firefox OS. His company, Tamoggemon Holding, is involved in consulting, application development and technical writing for the IT industry. Since 2004, Tam's focus has been on mobile computing; his work has been featured in various magazines and at several conventions.

The tests carried out here follow a two-step pattern: some tasks normally delegated to the lab technician were given, on a trial basis, to both Bard and the human helper. Comparing the results produced with electrical and with human energy then allows an estimate of quality.

Due to the cyclical nature of the business tasks involved, but also for reasons of commercial secrecy, not all tests could be carried out under completely real conditions. For this reason, some of the tasks are synthetic: they did not occur within the test framework, but are, in the author's opinion, representative of tasks that would typically be delegated to an assistant, or potentially to a programming assistant, in a consultancy project.

Artificial Intelligence comes with controversy. One of the ways providers of various AI systems deal with the problem is by initially making their system available only to a strictly limited group of users – the roll-out only takes place when it can be assumed, based on initial feedback, that no disasters will befall the provider. In the case of Bard, at the time of writing, the service is in early access and is only available to UK or US IP addresses. European (but also Canadian) IP addresses are currently still excluded from access.

The first step to working around this problem is to obtain a suitable IP address via VPN. Then call up the URL of Bard and register your interest in joining the waiting list. Google acknowledges this with the window shown in Figure 1.

Registration with Bard was successful (Fig. 1).

(Image: Google)

In the author's test, less than an hour passed before the account was activated. Reports from English-language sources, however, sometimes speak of waiting times of a day or more. If you are interested in Bard, it is therefore advisable to obtain a suitable IP address and complete the registration process as soon as possible.

After successful registration, one receives a slot, which Google confirms by e-mail. Then it is time to call up the URL again. On first use, Google asks you to agree to a general disclaimer, after which the virtual coder is ready for use.

It should be noted that Google does not deactivate the check of the country-specific IP address even after acceptance into the select circle of beta testers. If the VPN is switched off on a trial basis, Google denies the browser access to the Bard interface.

It should also be noted that Bard did not always function one hundred percent reliably at the time of writing. Time and again, error messages indicate an alleged lack of internet connection. However, in all cases, the request was immediately answered when it was repeated. Google may be limiting access to Bard with artificial rate limiting.

At least since the resounding success of the Arduino Uno and its on-board LED connected to pin 13, the world of electronics programming has had a new "Hello World" programme. Because it's fun, Bard was given this as its first task. Figure 2 shows the result: Bard was able to generate the blink example successfully.

Bard successfully creates the Blink example (Fig. 2).

(Image: Google)
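For readers who want to reproduce the result, a minimal blink sketch of the kind in question might look like the following; the on-board LED on pin 13 of an Arduino Uno is the classic assumption:

// Classic "Hello World" of embedded programming: blink the on-board LED.
const int ledPin = 13; // assumption: on-board LED of the Arduino Uno

void setup() {
  pinMode(ledPin, OUTPUT);      // configure the LED pin as an output
}

void loop() {
  digitalWrite(ledPin, HIGH);   // switch the LED on
  delay(1000);                  // wait one second
  digitalWrite(ledPin, LOW);    // switch the LED off
  delay(1000);                  // wait another second
}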

Several details of the output shown in Figure 2 are interesting. At the top right of the window there is a combo box through which the AI assistant offered three slightly different drafts of its code in many of the test runs carried out here. In the example shown, however, the actual structure of the generated code was identical in all three drafts; the only differences lie in the rich documentation comments.

Scrolling further down the window also reveals references to a GitHub repository and to white papers in numerous places. Bard thus tries to inform developers where it obtained the information used to fulfil the task at hand.

The next step is to extend the example to two light-emitting diodes that operate alternately. Since Bard displays an input field under its answers, the obvious move is to ask for an extension of the generated code, as shown in Figure 3.

Request to Bard to extend the previous example (Fig. 3).

(Image: Google)

At this point, a weakness of Bard becomes apparent: if you keep drilling for alternatives with extensions or additions after a request has already been answered, the results are in many cases much worse. The code shown here is firstly intended for Python and not for the Arduino (the imports reference the Raspberry Pi), and secondly, the LEDs are not switched on and off alternately.

What is most confusing about this behaviour is that an entirely fresh query to the bot, as shown in Figure 4, once again results in a working light-emitting diode alternator.

A new request leads to functional code again (Fig. 4).

(Image: Google)
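For comparison, a working alternator along the lines of Figure 4 takes only a few lines; the pin assignments 12 and 13 are illustrative assumptions:

const int led1 = 13; // assumption: first LED on pin 13
const int led2 = 12; // assumption: second LED on pin 12

void setup() {
  pinMode(led1, OUTPUT);
  pinMode(led2, OUTPUT);
}

void loop() {
  digitalWrite(led1, HIGH);  // first LED on, second LED off
  digitalWrite(led2, LOW);
  delay(1000);
  digitalWrite(led1, LOW);   // swap the two states
  digitalWrite(led2, HIGH);
  delay(1000);
}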

Encouraged by the previous results, experiments with other platforms followed. For the ESP32, the prompt contained a twist: it explicitly requested code based on the ESP-IDF framework, as shown in Figure 5.

Bard also speaks IDF (Fig. 5).

(Image: Google)

A quick analysis revealed that although Bard's code is very verbose, it works in principle.
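To put the output into context: an ESP-IDF blink task typically looks roughly like the following sketch. GPIO 2 is an assumption here, since many ESP32 boards wire their on-board LED to that pin:

#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "driver/gpio.h"

#define BLINK_GPIO GPIO_NUM_2  // assumption: on-board LED on GPIO 2

void app_main(void)
{
    gpio_reset_pin(BLINK_GPIO);                        // detach any other function
    gpio_set_direction(BLINK_GPIO, GPIO_MODE_OUTPUT);  // configure as output
    while (1) {
        gpio_set_level(BLINK_GPIO, 1);                 // LED on
        vTaskDelay(pdMS_TO_TICKS(1000));               // FreeRTOS delay, one second
        gpio_set_level(BLINK_GPIO, 0);                 // LED off
        vTaskDelay(pdMS_TO_TICKS(1000));
    }
}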

The next step is to test more exotic architectures, specifically the STM32 (a widely used family from STMicroelectronics) and the GD32VF, a very modern RISC-V MCU from GigaDevice. Here as well, the generated sample projects appear executable, at least in a first optical smoke test (see Figs. 6 and 7).

LED flashing with STM32 ... (Fig. 6)

(Image: Google)
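As a reference point for Figure 6: with the STM32 HAL, the core of such a blink program can be sketched roughly as follows. The F1-series header and pin PA5, the user LED of many Nucleo boards, are assumptions:

#include "stm32f1xx_hal.h"  // assumption: an STM32F1-series target

int main(void)
{
    HAL_Init();                           // HAL and SysTick initialisation
    __HAL_RCC_GPIOA_CLK_ENABLE();         // clock for GPIO port A

    GPIO_InitTypeDef led = {0};           // assumption: user LED on PA5
    led.Pin   = GPIO_PIN_5;
    led.Mode  = GPIO_MODE_OUTPUT_PP;      // push-pull output
    led.Pull  = GPIO_NOPULL;
    led.Speed = GPIO_SPEED_FREQ_LOW;
    HAL_GPIO_Init(GPIOA, &led);

    while (1) {
        HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5); // toggle the LED
        HAL_Delay(500);                        // wait 500 ms
    }
}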

For those who are not yet familiar with the term: a smoke test is one in which the device under test (DUT) is connected to the power supply unit (PSU) to see whether smoke rises.

... and with the GigaDevice GD32VF (Fig. 7).

(Image: Google)

Finally, Bard was asked to extend the GigaDevice sample project with additional options. Figure 8 shows that the generated code looks quite idiomatic: the use of the GPIO_SetBits command to set the light-emitting diodes is part of advanced GPIO manipulation.

On the GD32VF, Bard idiomatically programs with advanced GPIO manipulation (Fig. 8).

(Image: Google)
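For classification, a roughly equivalent sketch based on GigaDevice's standard firmware library might look like this. Pin PA1, one of the Longan Nano's LED pins, is an assumption, and a crude busy-wait stands in for a proper timer:

#include "gd32vf103.h"  // GigaDevice standard firmware library

int main(void)
{
    rcu_periph_clock_enable(RCU_GPIOA);  // enable the clock of GPIO port A
    // assumption: LED on PA1, configured as a 50-MHz push-pull output
    gpio_init(GPIOA, GPIO_MODE_OUT_PP, GPIO_OSPEED_50MHZ, GPIO_PIN_1);

    while (1) {
        gpio_bit_set(GPIOA, GPIO_PIN_1);    // drive the pin high
        for (volatile int i = 0; i < 1000000; i++) { }  // crude delay
        gpio_bit_reset(GPIOA, GPIO_PIN_1);  // drive the pin low
        for (volatile int i = 0; i < 1000000; i++) { }
    }
}

Whether "high" means "LED on" depends on the board's wiring; on the Longan Nano, for example, the LEDs are active-low.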

Although often all roads lead to Rome, some are shorter and less costly than their suboptimal counterparts. This is especially true for various numerical tasks.

Jack Ganssle, an embedded systems engineer and author popular in the electronics field, published the following passage, recounting his experience of having an AI create a CRC calculation routine, in issue 469 of his newsletter "The Embedded Muse": "I asked him to write a C program to calculate a CRC. The result looked pretty good ... but it was fundamentally wrong. It should have asked me what kind of CRC I wanted; instead, it produced code for the (awkward!) most common kind."

The author checked this behaviour in the next step by asking the AI system to generate FFT code. The Fast Fourier Transform (FFT) is one of the hairiest tasks in this area: depending on the host system used for execution, implementations can differ by a factor of 100 in performance and memory requirements.

Figure 9 shows the result returned by Bard at the time of writing. It is obvious that the AI chatbot does not ask for more context or information about the structure of the target system (for example, whether a floating-point unit, FPU, is present) before "coding away".

Bard codes away without asking for further context (Fig. 9).

(Image: Google)
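For illustration, a textbook in-place radix-2 Cooley-Tukey FFT in C might look like the following sketch. It assumes double-precision floating point and a power-of-two input length, which is precisely the kind of target-system context, such as the presence of an FPU, that Bard never asked about:

#include <complex.h>
#include <math.h>
#include <stddef.h>

/* In-place, iterative radix-2 Cooley-Tukey FFT; n must be a power of two.
   Double-precision complex arithmetic is convenient on a PC; on an FPU-less
   microcontroller, a fixed-point variant would usually be preferable. */
static void fft(double complex *x, size_t n)
{
    const double pi = 3.14159265358979323846;

    /* step 1: reorder the input into bit-reversed index order */
    for (size_t i = 1, j = 0; i < n; i++) {
        size_t bit = n >> 1;
        for (; j & bit; bit >>= 1)
            j ^= bit;
        j ^= bit;
        if (i < j) {
            double complex t = x[i];
            x[i] = x[j];
            x[j] = t;
        }
    }

    /* step 2: butterfly stages of growing length */
    for (size_t len = 2; len <= n; len <<= 1) {
        double complex w = cexp(-2.0 * I * pi / (double)len); /* twiddle factor */
        for (size_t i = 0; i < n; i += len) {
            double complex wk = 1.0;
            for (size_t k = 0; k < len / 2; k++) {
                double complex u = x[i + k];
                double complex v = wk * x[i + k + len / 2];
                x[i + k]           = u + v;  /* upper butterfly output */
                x[i + k + len / 2] = u - v;  /* lower butterfly output */
                wk *= w;
            }
        }
    }
}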

Following Jack Ganssle's template, the next step is to prompt Bard to generate a CRC algorithm. The reward for the effort is the rather simple code snippet shown in Figure 10.

Daredevil bot? Bard does not ask before assembling a basic CRC routine (Fig. 10).

(Image: Google)
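To illustrate why the missing enquiry matters: polynomial, initial value, bit reflection and final XOR all differ between CRC variants. A sketch of one common parameterisation, CRC-16/CCITT-FALSE, might look like this:

#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF,
   no reflection, no final XOR. Other CRC variants differ in exactly
   these parameters, which is why a good assistant asks first. */
uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;      /* feed in the next byte */
        for (int bit = 0; bit < 8; bit++)   /* process it bit by bit */
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

Checking such a routine against the standard test string "123456789" (0x29B1 for this particular variant) quickly reveals which CRC has actually been implemented.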

If a human lab technician were to tackle a task set by the design engineer in this way, the response would be a usually friendly, but sometimes annoyed, enquiry as to which variant is actually required.

Bard knows ... (Fig. 11)

(Image: Google)

Since Bard, after all, sees itself as a robotic substitute for the human lab assistant, the next step was to ask two follow-up questions. Interestingly, Bard answered them competently, as the following two pictures show:

... quite a lot (Fig. 12).

(Image: Google)