What does the acceleration of AI development mean for us?

Many of the things we’ve become familiar with use AI.
The roots go back a long way: in the 1640s Blaise Pascal invented one of the first mechanical calculating machines, and the ideas behind “artificial intelligence” go back further still.

The explosion of new apps since Microsoft invested $10 billion in OpenAI, the maker of ChatGPT (launched in November 2022), has both excited and dismayed the world. Some see huge opportunities and some see the destruction of jobs. Some even see the destruction of humanity.

In this Ask the Expert event, we invited Jo Stansfield to give us her views.
Jo is the founder of Inclusioneering, a social enterprise specialising in diversity, equity, and inclusion (DEI) solutions and consultancy for technology and engineering organisations. Jo has been a member of ForHumanity for many years. ForHumanity is a not-for-profit public charity that examines and analyses the downside risks associated with the ubiquitous advance of AI & Automation, to engage in risk mitigation and ensure the optimal outcome … for humanity.

This short video is Jo’s introduction to the topic:

Jo shared this information about how ubiquitous AI is in our lives:

In breakout rooms, our members discussed how ChatGPT and many other AI apps have given them free, easy-to-use tools to create written content, videos, images, music, and much more.

The conversational search formats of Microsoft’s GPT-powered Bing Chat and Google’s Bard have brought new dimensions, and new concerns, to how information is accessed. This has many implications for SEO and for the huge search and advertising business (estimated at over $500 billion) that Google largely dominates. Reports that Samsung was considering making Bing the default search engine on its phones prompted Google to accelerate the development of Bard to compete with ChatGPT.

There are also concerns about how information is being gathered and used to advance machine learning, and about how we can protect our original creations and both give and get credit where it’s due.

The biggest concerns are accuracy and bias.

Jo’s reflections on bias:

  • Some biases are built into the training data, and the quantity of data needed is huge. Mitigate with better training data?
  • Some arise in the design, from the limits of what the design team considers. Mitigate with diverse teams.
  • Some are introduced as the algorithms continue to learn. Mitigate with regular evaluation and review of performance, with the means to rectify learned bias (see the sketch after this list).
  • Some come from a self-reinforcing bias cycle, e.g. ShotSpotter.
  • Some arise when an application is not valid for the use it is deployed in.
  • Some are about who has access.
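
As a concrete illustration of the “regular evaluation” point above, here is a minimal sketch of one common fairness check: comparing a model’s positive-prediction rate across demographic groups and flagging a large gap (the “four-fifths” disparate-impact rule of thumb). It is written in Python; the data, group labels, and threshold are hypothetical examples chosen for illustration, not material from Jo’s talk.

```python
# Minimal sketch of a demographic-parity check on model outputs.
# All data, group labels and the 0.8 ("four-fifths") threshold below are
# illustrative assumptions, not figures from the talk.

from collections import defaultdict


def selection_rates(predictions, groups):
    """Share of positive predictions per demographic group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Hypothetical model decisions (1 = approved) and group membership.
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
    groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

    rates = selection_rates(preds, groups)
    ratio = disparate_impact_ratio(rates)
    print("Selection rates by group:", rates)
    print("Disparate impact ratio:", round(ratio, 2))
    if ratio < 0.8:  # the common "four-fifths" rule of thumb
        print("Warning: possible disparate impact - review and retrain.")
```

In practice a check like this would be run regularly on real model outputs, alongside the other mitigations listed above.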

We just scratched the surface of this hugely complex topic and are very grateful to Jo for giving us a glimpse of how things are developing so rapidly.

Find out more and get involved

Get in touch with Jo Stansfield

Follow Jo on LinkedIn for regular updates on the work she’s involved in.