AI hype - Raining on the parade


Welcome to the Robot Remix, where we summarise the week's need-to-know robotics and automation news.

In today's email -

  • Hype, hype, hype
  • Robots master soccer and ping pong
  • Never fold laundry again
  • Learn the future of the industrial robotics industry
  • A tough week for vehicle start-ups

Snippets

Stable growth - Last week Stability AI, the company behind Stable Diffusion, announced a $101 million seed round at a $1B valuation. The hype could not be higher for this company: they appeared out of nowhere and are gaining an unbelievable amount of traction - see the graph above. What sets them apart? They are serious about making their models 100% open source. To them, it’s the only way to prevent AI inequality; critics aren't so sure. (News)

Robot goals - MIT have used a Deep Reinforcement Learning (DRL) approach to teach their robot dog to be a football goalie. The robot is able to assess the ball’s trajectory and can dynamically sidestep, dive, and jump to block the ball on its way to the goal. Maybe we’ll have a robot team at the next World Cup? (Research)

The gravity of the situation - Scientists in the Netherlands have figured out how insects detect gravity and have implemented it in a drone. Insects stop flying for a moment and use optical flow to detect their motion relative to their environment. Why copy this in drones? The approach is purely visual and removes the need for accelerometers, saving space, weight and power. (Research)

Ping pong robots - Google scientists have taught a robot to play ping pong using DRL. Another flashy sport demo? Yes, but there’s a good reason - it’s a great way to model fast-paced and relatively unpredictable human behaviour. The project simulated expected human behaviour and refined its model with actual practice, iterating between both until it was able to achieve a 340-hit rally. (Research)

Tough times UK - Arrival is restructuring its business for the second time in six months as it tries to become more capital efficient. It recently announced that it would move development to the US as it targets electric vans for the American market. This builds on announcements of budget cuts and redundancies in the UK. It’s a big blow for the company and the British robotics industry. (News)

Tough times USA - Ford and VW-backed self-driving car company Argo AI is shutting down. The company's most recent valuation was $7.2B, and the announcement reportedly came as a surprise to employees. Ford said it has made a strategic decision to shift its resources to developing advanced driver assistance systems rather than autonomous vehicle technology. (Breaking News)

Tesla under criminal investigation over self-driving claims - Sources claim that the U.S. Department of Justice is examining whether Tesla misled consumers, investors and regulators by making unsupported claims about its driver assistance technology’s capabilities. The investigation could result in criminal charges against the company or individual executives. The self-driving industry is going through a bit of a rough patch. (Breaking News)

The future of industrial automation? - Formic’s business model has been analysed by Not Boring. Although it's a paid plug, there is a lot to dig into - why automation is a necessity, the flaws in the current value chain, Formic’s clever way of funding deployments and a lot more. This is required reading for anyone working in industrial automation, as it makes some strong predictions for how our industry may get shaken up… (Long Read)

The Big Idea

AI hype - Raining on the parade

If you’ve been following tech Twitter, you will have seen the buzz and excitement around AI - it's everywhere right now. The near-magical success of Large Language Models (LLMs) and their image-generating cousins has whipped everyone into a frenzy. With some high-profile fundraises - Stability AI $101M (see above), Anthropic $580M and Inflection AI $225M - VCs are jumping on the bandwagon for fear that they’ll miss out.

We’re as excited as anyone else about the recent developments, but it's important to ground ourselves. AI, more than any other industry, progresses through hype cycles. AI hype has been followed by AI winter since the 1960s, and you really have to get worried when people keep saying AI is the new crypto…

The recent growth spurt has convinced a lot of people that we’re at a tipping point and that AGI is finally only a few years away. Why is this an issue? The greater the hope, the greater the disappointment when expectations aren't met - again, look at crypto.

So let's pump the brakes a little. Why might we be getting a bit ahead of ourselves?

Right now, AI is basically synonymous with Deep Learning - it's the approach used in nearly every example of AI we can think of. As a reminder, a Deep Learning model is a flexible statistical model that learns the relationship between input and output training data, varying its own parameters until it can output the correct value for a given input. The more parameters, the more flexible the model and the better it performs. The innovation that led to the AI explosion in the 2000s was stacking more layers, and therefore more parameters - this is where the "Deep" in Deep Learning came from. In Deep Learning, bigger is better. If you want a more in-depth review, check out our series on Deep Reinforcement Learning.
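
To make that concrete, here is a minimal, hypothetical sketch of that loop in Python - a toy one-weight model fitted by gradient descent rather than a real deep network, but the principle is identical: vary the parameters until the outputs match the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: inputs x and noisy targets y = 3x + 1.
x = rng.uniform(-1, 1, size=(100, 1))
y = 3 * x + 1 + 0.05 * rng.normal(size=(100, 1))

# The "model" is just two parameters: a weight and a bias.
w, b = 0.0, 0.0
lr = 0.1  # learning rate

for step in range(500):
    pred = w * x + b               # forward pass: current input-to-output guess
    err = pred - y
    grad_w = 2 * np.mean(err * x)  # gradient of mean squared error w.r.t. w
    grad_b = 2 * np.mean(err)      # ...and w.r.t. b
    w -= lr * grad_w               # nudge the parameters to reduce the error
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # converges to roughly 3 and 1
```

A real deep network stacks millions or billions of such parameters across many layers, but the training loop is the same - just vastly bigger.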

One of the challenges of this approach is that Deep Learning models are overparameterized: they have more parameters than there are data points available for training. The larger the model, the more training data is required and the more computing power is needed.

The most recent boom in AI has followed this trend. Although there have been some innovations in algorithm design, dev ops, hardware and so on, the real light-bulb moment came when developers asked, “what happens if we feed our model ungodly amounts of data?”. They made language models large - hence LLMs.

This approach had a phenomenal impact on model performance, but it begs the question: is it sustainable? The first concern is where to find more unique data, as the majority of public text data has already been exhausted. The second is the sheer cost. (Tangent: there are a lot of rumours going around about GPT-4 - apparently it is even more mind-blowing than GPT-3…)

Some basic stats theory - To improve model performance by a factor of k, at least k^2 more data points must be used to train the model, requiring k^2 more computation. Because Deep Learning models are overparameterised, the computation actually grows as k^4. In theory, a 10-fold improvement requires a 10,000-fold increase in computation.

You know what they say about theory and practice… IEEE Spectrum analysed 1,000 research papers on Deep Learning and found that, in practice, the computational energy scaled with the 9th power. A 10-fold improvement requires a 1,000,000,000-fold increase in computation. This is rather large - see the quick arithmetic after the examples below. These costs are already an issue –

  • When DeepMind trained its system to play Go, it was estimated to have cost $35 million.
  • When OpenAI trained GPT-3, it cost more than $4 million. During training they made a mistake that they did not fix due to the cost of retraining.
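
To see how brutal those exponents are, here is the arithmetic as a quick back-of-the-envelope script (a hypothetical sketch; the factors come straight from the theory and the IEEE Spectrum finding above):

```python
# Scaling factors for a k-fold improvement in model performance.
k = 10

data_needed = k ** 2        # ~k^2 more training data
compute_theory = k ** 4     # ~k^4 more computation (overparameterised models)
compute_practice = k ** 9   # ~k^9, as IEEE Spectrum measured in practice

print(f"{k}x better -> {data_needed:,}x data")
print(f"{k}x better -> {compute_theory:,}x compute (theory)")
print(f"{k}x better -> {compute_practice:,}x compute (practice)")
# 10x better -> 100x data
# 10x better -> 10,000x compute (theory)
# 10x better -> 1,000,000,000x compute (practice)
```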

This doesn’t even include the infrastructure required. When Microsoft and Nvidia created the Megatron-Turing NLG language model, with 530 billion parameters, Hugging Face estimated they must have spent $100M on GPU servers, networking equipment and hosting costs. In summary: not very sustainable, financially or environmentally (see the graph below from IEEE Spectrum).

Will AI continue to improve at the rates we’ve seen? If it follows the same approach, probably not. To quote IEEE Spectrum –

“Faced with computational scaling that would be economically and environmentally ruinous, we must either adapt how we do deep learning or face a future of much slower progress.”

There are solutions –

  • Improve computing hardware - the move from CPUs to GPUs to highly specialised ICs has brought efficiencies, and new innovations may bring further improvements
  • Fine-tune models - don't start from scratch; fine-tune a more general model to meet your specific needs
  • Prune models - remove model parameters that have little or no impact on the predicted outcome
  • Fusion - merge adjacent model layers so they execute as a single operation
  • Quantization - store model parameters in smaller values (say, 8 bits instead of 32 bits) - see the sketch after this list
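
As a flavour of what these optimisations look like, here is a minimal, hypothetical sketch of post-training quantization in Python - a toy illustration of the idea, not how any production toolchain implements it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are a trained layer's 32-bit floating point weights.
weights = rng.normal(scale=0.1, size=1000).astype(np.float32)

# Map the observed float range onto the signed 8-bit range [-127, 127].
scale = np.max(np.abs(weights)) / 127.0
quantized = np.round(weights / scale).astype(np.int8)

# At inference time, dequantize back to approximate floats.
restored = quantized.astype(np.float32) * scale

print(f"storage: {weights.nbytes:,} B -> {quantized.nbytes:,} B")  # 4x smaller
print(f"max rounding error: {np.max(np.abs(weights - restored)):.5f}")
```

The weights shrink fourfold in exchange for a tiny loss of precision - useful, but modest against the exponents above.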

The issue is that, so far, these have brought only incremental improvements and haven’t made a dent in the figures we’ve discussed.

There is another option. The issues we’ve listed are specific to Deep Learning. Deep Learning has brought phenomenal progress, but it does have flaws that seem challenging to overcome. There is an opportunity for AI to expand away from purely statistical models into other forms of machine learning. This could be a return to symbolic AI or something completely new.

Interesting Investments

Cyberdontics raised $15M – They are developing a robot dentist capable of extremely accurate tooth cutting. The system is built for speed, with root canals taking under a minute. This is both awesome and terrifying.

Generally Intelligent raised $20M - The “independent research company” is aiming straight for general AI. They plan to study the fundamentals of human intelligence in order to engineer safe AI systems that can learn and understand the way humans do. This may sound overly ambitious, but founders Kanjun Qiu and Josh Albrecht have their investors convinced: they have pledged a further $100M based on progress.

Ambi Robotics raised $32 million - They are building an AI-powered parcel sorting system that is able to pick random parcels from bulk. Their product has impressive results - 1,200 items safely sorted per hour with more than 99% accuracy.

Outsight raised $22M – The funding will be used to improve their LiDAR-based 3D software solution. Their software simplifies the implementation of lidar hardware and moves the processing to the cloud.

Video

https://twitter.com/i/web/status/1582454445337587713

Folding crinkly laundry is a pain for people and even more of a pain for robots. Until now.

Why is it so hard to automate? Crinkled fabrics can lie in a near-infinite number of poses and have complex non-linear dynamics, making them a challenge to train for.

Great news - A team from Berkeley seems to have cracked it. Their robot can fold 30-40 garments per hour. Check out the video to learn how you can avoid more chores with robots.

Meme of the week

Jack Pearson

London