What happens when robots can program themselves?

Welcome to the Robot Remix, where we summarise the week's need-to-know robotics and automation news.

In today's email -

  • Ferrari’s not keen on self-driving
  • French soldier bots
  • What's the point of humanoid robots?
  • Self-programming AIs

Snippets

Drug dispensing origami - Stanford engineers have developed a “wireless amphibious origami millirobot” which can speedily travel over an organ’s slick, uneven surfaces and swim through body fluids, propelling itself wirelessly while transporting medicines.

Ferrari rejects self-driving - the automaker’s CEO has declared that although they are embracing electric vehicles, they will never develop full automation. “No customer is going to spend money for the computer in the car to enjoy the drive.”

France explores military robots - the French army has kicked off Vulcain, a project to investigate whether robots can provide increased mass and endurance on the future battlefield. They have no intention of procuring fully autonomous lethal systems but are interested in semi-autonomous lethal weapons, where a human decision-maker is always in the loop.

Cat and mouse computing - Scientists from Tsinghua University have developed a brain-inspired chip capable of multitasking without high energy consumption, a key development if control hardware is to keep pace with advanced software algorithms. They tested the system by training two robots to chase and evade each other.

Unconscious computers - Last week we discussed how Google’s LaMDA algorithm tricked an engineer into believing it was conscious. Turns out they’re developing a language model called PaLM that is nearly four times more powerful and capable of “logical reasoning”. The Atlantic explains some of the unexplainable and strange properties that PaLM is exhibiting. It still isn’t sentient, though.

The Big Idea

What happens when robots can program themselves?

OpenAI has recently demonstrated a new algorithm called “Evolution through Large Models” (ELM). ELM combines genetic programming with large language models to create a program able to generate code and then optimise it to meet specific goals. This is a bit of a mouthful, so let's break it down to understand what it means and why it's important.

Large Language Models (LLMs) - These are AI systems that analyse and generate text, trained on large data sets (thousands of terabytes). LLMs are the hot new thing in AI and have been a popular topic of this newsletter, where we have discussed examples such as GPT-3 and LaMDA. One of the most exciting LLMs is OpenAI’s Codex, which is able to write programs from basic written descriptions. Codex is already being used by programmers as “autocorrect on steroids”, helping coders by completing whole functions or auto-filling repetitive code as they type. Although impressive, the process still requires a lot of human debugging.
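To make the “autocorrect on steroids” idea concrete, here is a sketch of the kind of completion a Codex-style model produces. Only the comment and signature are the programmer's input; the body is the sort of code the model fills in (this example is our own illustration, not actual Codex output):

```python
# Programmer types the comment and signature; a Codex-style model
# completes the body below.

def moving_average(values, window):
    """Return the moving averages of `values` over a sliding `window`."""
    if window <= 0:
        raise ValueError("window must be positive")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]
```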

Genetic Algorithms - take inspiration from evolutionary biology to find an optimal solution to a problem: randomly sample the design space to produce candidate designs, evaluate how each performs, then combine the best designs into a new generation, over and over until the target objective is met. This can be used to optimise mechanical designs or even teach a neural network to smash a level of Super Mario.
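For the curious, here is a minimal sketch of that loop in Python, evolving a bit string towards an all-ones target (a toy stand-in for a real objective; all names are our own):

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 20, 50, 200

def fitness(genome):
    # Score a design: here, how many bits match the all-ones target.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Random, undirected changes -- note the contrast with ELM later on.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Combine two parent designs at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break  # objective met
    parents = population[: POP_SIZE // 2]  # the best designs survive
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"generation {gen}: best fitness {fitness(max(population, key=fitness))}")
```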

Genetic Programming (GP) - A subfield of genetic algorithms which aims to write entire software programs. There are two major failings of GP approaches -

  • Not open-ended - applying GP to a new domain requires significant expertise. A user must explicitly specify what functions, variables, and control structures are available to the program, limiting the search space. In contrast, a human can open-endedly decide what libraries to import and how to write many interdependent subroutines or classes.
  • Randomness - nearly all GP algorithms change the code randomly, which can have a large impact on its functionality and complicates the search. In contrast, humans learn to reason with code in its full complexity and can make much more targeted and efficient modifications.

Evolution through Large Models - combines Large Language Modelling with Genetic Programming. The explanation gets a bit technical so if you’re bored skip to the results.

The ELM has three main components, which operate in a nested loop (a minimal sketch follows the list). These are -

  • First, the LLM is trained by examining examples of changes made to code by human developers. This is done using a diff model, where the training data is the difference between successive versions of code, similar to the diffs tracked by version-control systems such as Git, the system underlying platforms like GitHub. This LLM is used to intelligently change (or ‘mutate’, in GP jargon) a baseline program. By using an intelligent model at this stage, the modifications made to the code are more logical, avoiding the inherent randomness of previous GP methods.
  • The mutated program is then fed into a genetic algorithm, which decides what mutations to try next. Many genetic algorithms exist and they are largely plug-and-play, so any of the leading ones would probably work. As a quick aside, one of our favourites simulates a game of hide and seek.
  • Finally, the results of the genetic algorithm are used to train the LLM of Step 1. Meta, we know, but it checks out. The system can learn even more by comparing which changes led to functional code outputs for the current problem. The researchers call this “learning to think about change”. Whilst they’re quick to highlight that this may bias the model to specific problems, it has one fascinating conclusion: “practice makes perfect” now holds true for computers too.
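Putting the pieces together, the nested loop looks roughly like this. It is a heavily simplified sketch: `diff_model`, `evaluate` and `fine_tune` are our own placeholder names standing in for the paper’s diff LLM, fitness test and training step.

```python
import random

def diff_model(program):
    # Step 1 stand-in: the trained diff LLM proposes an intelligent edit.
    # Here we just tag the program so the loop runs end to end.
    return program + f"\n# proposed edit {random.randint(0, 999)}"

def evaluate(program):
    # Fitness stand-in, e.g. "how far does the generated robot travel?"
    return random.random()

def fine_tune(edits):
    # Step 3 stand-in: retrain the LLM on the edits that worked.
    pass

population = ["def robot():\n    pass"]  # the hand-written seed program
accepted = []

for generation in range(10):
    # Step 1: the LLM mutates programs instead of flipping tokens at random.
    candidates = [diff_model(p) for p in population for _ in range(5)]
    # Step 2: the genetic algorithm keeps the fittest mutants.
    population = sorted(candidates, key=evaluate, reverse=True)[:3]
    accepted.extend(population)

# Step 3: close the loop -- successful edits become new training data.
fine_tune(accepted)
```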

The Results

The main focus of the work was to ‘invent’ moving robots, as shown in the gif above. To do this, the ELM program was given some basic Python 3 programs (or ‘seeds’) which would produce barely functional designs. They weren’t very good. ELM then modified the original scripts, eventually arriving at the innovative and unexpected solutions at the top of this article. The amazing part is that the resulting code is still easily readable and changeable by humans, meaning it could be quickly integrated into larger projects or system architectures.
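To give a flavour of what a seed might look like: the paper’s seeds programmatically build simple mass-and-spring walkers. The stub below is our own illustrative stand-in, not the paper’s actual code; ELM’s edits would change the geometry and spring parameters from here.

```python
class Walker:
    """Minimal stand-in for a mass-and-spring walker (our own stub)."""
    def __init__(self):
        self.masses, self.springs = [], []
    def add_mass(self, x, y):
        self.masses.append((x, y))
        return len(self.masses) - 1
    def add_spring(self, a, b, stiffness):
        self.springs.append((a, b, stiffness))

def make_walker():
    # A barely functional seed: four masses in a square, joined by springs.
    walker = Walker()
    corners = [walker.add_mass(x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]]
    for i in range(4):
        walker.add_spring(corners[i], corners[(i + 1) % 4], stiffness=1.0)
    return walker
```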

The full paper is jammed with really interesting case studies and advantages. Our favourite was ELM’s ability to fix bugs in code compared with traditional GP approaches. The researchers found that if more than three bugs existed, the GP algorithm couldn’t fix them, even with 100,000 mutations. In contrast, the new method successfully fixed five bugs in the code. Where has this been all our lives?

Implications for Robotics

This has huge implications for robotics, as it could help accelerate solutions to the industry’s biggest challenges. We’ve previously discussed how DeepMind uses digital models and simulation to train robots for complicated tasks. ELM could rapidly accelerate this process, with the added advantage that the resulting code is easier for a human to read and verify. We’d like to see this approach applied to flexible/reconfigurable robots, bipedal walking on rough ground, key insertion and manipulating conformal materials, to name a few.

In the long term, this brings us closer to one of the biggest milestones in artificial general intelligence - the ability for AIs to write and edit their own code. Once they become recursive, their rate of improvement will reach a tipping point - the better they get, the better they will get at improving their own code. On this trajectory, there is no reason to believe algorithms won’t surpass human code-writing capabilities. As in all things AI, this could be wonderful or it could be awful. An AI that can edit its own code could remove any alignment or safety protocols included by humans, leading us down the path to the ‘AI bear’ scenario. Next week Westworld Season 4 comes out and we’re planning to write about AI alignment - let us know if you like this idea!

Video

Why bother with humanoid robots?

Check out this entertaining review of humanoid robots. Highlights include -

  • Why pop culture is filled with negative examples of humanoid robots and yet we work tirelessly to create them
  • Why the humanoid form is worth copying
  • Why Tesla developing a robot makes sense even if they don't succeed anytime soon

GIF of the Week

We talk a lot about how automation can help us perform tasks. Dragan Ilic, a visual artist, has decided to invert this paradigm - he’s helping the robots to make art.

If This Robot Spinning A Serbian Man Isn't Art, I Don't Know What Is

This project is topical for us here at Remix: we’re currently developing an automated painting solution. We’re not strapping the interns to it just yet, though.

Jack Pearson

London