The AI Landscape: Navigating Innovations, Ethics, and Global Implications
Welcome back to the second issue of Artificially Intelligent, the newsletter that dares to make sense of the sprawling, often enigmatic world of Artificial Intelligence. We’ve got a line-up that goes deep—Edinburgh’s upcoming supercomputer that's slated to push the boundaries of AI, healthcare, and clean energy; the latest U.S. policy move tightening the leash on AI chip exports to China, stoking the coals of geopolitics; and Google’s decision to wade into the murky waters of AI-generated content and copyright law. For those who find the lingo a labyrinth, we've compiled a glossary to light your way. And don't skip our watch-list—it's a curated collection that offers perspectives you won't want to miss.
Let's pull back the curtain and dive in, shall we?
On today’s menu
👩💻 Edinburgh to welcome world’s most powerful supercomputer
⛔️ US to Tighten Restrictions on AI Chip Exports to China
📑 Google Vows to Defend Generative AI Users Against Copyright Lawsuits
🎤 The local lingo
👀 🎧 What we’re watching and listening to
Newsroom
Edinburgh to welcome world’s most powerful supercomputer
Edinburgh has been selected to host a next-gen exascale supercomputer that will be 50 times more powerful than the UK's current top system, ARCHER2. The facility will accelerate research in AI, medicine, and clean energy, and is part of a £900 million UK government investment.
This new computing powerhouse in Edinburgh isn't just a tech marvel; it's a launchpad for future innovators. With capabilities to fast-track research in artificial intelligence, healthcare, and sustainable energy, the opportunities for cutting-edge research and high-skilled jobs will multiply. Think of it as laying the groundwork for the questions you'll be answering in your future careers.
US to Tighten Restrictions on AI Chip Exports to China
The U.S. is updating its restrictions to block more AI chip exports to China. This move aims to prevent American chipmakers from selling semiconductors to China that circumvent existing government regulations. These new rules will specifically target some AI chips that fall just under the current technical parameters and will require companies to report certain shipments. The latest measures come amidst ongoing efforts to mend U.S.-China relations but are designed to keep American technology from benefiting China's military.
With the U.S. government tightening the screws on AI chip exports to China, there's a clear signal of the strategic importance of this technology. For anyone considering a career in tech, this not only highlights the value of specialising in areas like semiconductor technology and artificial intelligence but also shows how closely linked tech and geopolitics are becoming. This can be a double-edged sword, offering opportunities for innovation but also subjecting the tech world to increased scrutiny and regulation.
Google Vows to Defend Generative AI Users Against Copyright Lawsuits
Google has announced plans to defend users of its Cloud and Workspace services against intellectual property lawsuits that arise from the use of generative AI. This move comes as issues surrounding copyright infringement in AI-generated content heat up, prompting similar commitments from other tech giants like Microsoft.
Google's promise to back users against copyright claims for AI-generated content is a significant milestone in the evolving landscape of AI and copyright law. For creative professionals using AI tools, this announcement offers a layer of protection, as long as they follow Google's responsible use guidelines. However, it's crucial to remember that this indemnity does not absolve users of all responsibility; intentional infringement still falls on the individual. Thus, while the announcement makes AI tools more accessible and less risky for creatives, it also serves as a reminder of the ethical considerations that come with the power of generative AI.
The local lingo
AI comes with a fair amount of jargon: acronyms and terms that might not (yet) mean much to you. Below you will find 10 terms to help you better understand what’s in the news at the moment.
Artificial Intelligence (AI)
Artificial intelligence (AI) is the simulation of intelligent behaviour by computers: the power to think, learn, and make decisions in ways that resemble how we do.
Artificial General Intelligence (AGI)
A hypothetical type of AI that would be able to understand, learn, and apply knowledge across domains, much as humans do. Unlike most AI systems we see today, which are designed for specific tasks, AGI would have the capacity to perform a wide range of tasks, learn from experience, adapt to new situations, and think creatively.
For obvious reasons this is a major cause for concern, but in reality most estimates put AGI decades away.
Algorithm
An algorithm is a set of step-by-step instructions a computer follows in order to complete a certain task.
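To make that concrete, here's a toy algorithm written out in Python (the function name and numbers are our own illustration): a fixed set of steps for finding the largest number in a list.

```python
def largest(numbers):
    """Step through a list, keeping track of the biggest value seen so far."""
    biggest = numbers[0]
    for n in numbers[1:]:
        if n > biggest:
            biggest = n
    return biggest

print(largest([3, 17, 5, 12]))  # prints 17
```

The computer doesn't "understand" the task; it simply follows the instructions, one after another, every time.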
Bias
Generative AI models learn from vast amounts of data, including sources like video transcripts, historical books, and websites. Sometimes, the information in these datasets can be biased, meaning it might favor certain groups of people and not treat everyone fairly. As a result, the responses generated by these AI models may unintentionally reflect this bias.
Deep Learning (DL)
Deep learning (DL) refers to the branch of machine learning concerned with neural networks, usually consisting of several ‘layers’. These layers help the computer recognise complex patterns in very large sets of data.
Machine Learning (ML)
Machine Learning is like the 'smart learning' part of AI. It's all about teaching computers to get better at things by themselves.
Think of it like training a pet, but the pet here is a computer. You show the computer lots of examples or data, and it learns from them. It's like teaching a computer to recognise your friends' faces in photos or suggesting what music you might like to listen to.
And the fascinating thing is that the computer can continuously improve at these tasks, even if it's something it's never seen before.
There are different ways of teaching algorithms. In 'Supervised Learning', we show them examples where the training data is labelled, so the answers come attached. In 'Unsupervised Learning', algorithms must find structure in the data by themselves. Finally, there's 'Reinforcement Learning', where an algorithm learns by trial and error, receiving rewards for good choices and penalties for bad ones. It's a bit like trying different teaching methods to see what works best for different tasks.
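As a minimal sketch of supervised learning, here's a tiny "nearest neighbour" classifier in plain Python. The labelled examples (and the pet theme) are entirely made up for illustration; real systems use far more data and far more sophisticated models.

```python
def predict(examples, query):
    """Return the label of the labelled example closest to the query point."""
    def distance(a, b):
        # Squared distance between two points, coordinate by coordinate.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda pair: distance(pair[0], query))[1]

# Illustrative labelled data: (height_cm, weight_kg) -> label
training = [((30, 5), "cat"), ((60, 25), "dog"), ((28, 4), "cat")]
print(predict(training, (32, 6)))  # prints 'cat'
```

The "learning" here is simply remembering the labelled examples and comparing new inputs against them, which is the core idea behind showing a computer lots of examples.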
Neural Networks
Neural networks are computer algorithms inspired by the intricate wiring of the human brain. These networks consist of interconnected layers of nodes that work together to process and analyse data.
They are widely employed in various fields, including image and video recognition - where they help identify objects and patterns in visual data. In the realm of natural language processing (NLP), neural networks enable machines to understand and generate human language, making chatbots and translation systems possible.
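At the smallest scale, a neural network is built from single "neurons" like the one below: a weighted sum of inputs squashed through a function. The weights, bias, and inputs here are arbitrary numbers chosen purely for illustration.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, then a sigmoid squash."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # output always lands between 0 and 1

print(neuron([0.5, 0.8], [0.4, -0.2], 0.1))
```

A full network stacks thousands or millions of these into layers, and "training" means nudging all the weights until the outputs become useful.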
Natural Language Processing (NLP)
Natural Language Processing (NLP) is the field concerned with enabling computers to process and interpret natural language, picking up on the small nuances we often don’t even think twice about and giving the text structure. This processing ability is what chatbots leverage, and it has advanced dramatically in recent years.
Parameters
Parameters are the internal values a model learns from its training data, and their count is a rough measure of the model's size. That said, the "power" of an AI model depends on a combination of factors, including the number of parameters, the quality of the training data, the efficiency of the algorithms used, and how well it's fine-tuned for a specific task. In some cases, a smaller model with clever tuning might outperform a larger model.
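To see where parameter counts come from, here's a sketch that tallies them for a tiny fully connected network. The layer sizes are invented for the example; in a real model like GPT-4 the same arithmetic runs into the billions.

```python
def count_parameters(layer_sizes):
    """Each layer contributes (inputs x outputs) weights plus one bias per output."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# 4 inputs -> 8 hidden -> 3 outputs: (4*8 + 8) + (8*3 + 3)
print(count_parameters([4, 8, 3]))  # prints 67
```

Every one of those 67 numbers is adjusted during training, which is why parameter count is often used as shorthand for model size.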
Tokens
When an AI algorithm processes text, it is more efficient to break sentences down into what are called tokens; in the simplest case, these are the words in your prompt. Say I asked ChatGPT to ‘Give me some easy recipe ideas’: it would split that sentence into six tokens (words).
In generative AI, when a model is said to handle a certain number of tokens, this refers to how much text it can take in and keep track of at once. This is important to bear in mind when choosing a generative AI chatbot, as some can handle far more than others.
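Here's the deliberately simplified word-level split from the example above, in Python. Note this is a sketch: real systems like ChatGPT use subword tokenisers, so their token counts often differ from a plain word count.

```python
prompt = "Give me some easy recipe ideas"
tokens = prompt.split()  # naive tokenisation: split on whitespace

print(tokens)       # prints ['Give', 'me', 'some', 'easy', 'recipe', 'ideas']
print(len(tokens))  # prints 6
```

Counting tokens this way gives a rough feel for how quickly a long prompt can eat into a model's limit.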
What we’re watching
The inside story of ChatGPT's astonishing potential
In a talk from the cutting edge of technology, OpenAI cofounder Greg Brockman explores the underlying design principles of ChatGPT and demos some mind-blowing, unreleased plug-ins for the chatbot that sent shockwaves across the world. After the talk, head of TED Chris Anderson joins Brockman to dig into the timeline of ChatGPT's development and get Brockman's take on the risks, raised by many in the tech industry and beyond, of releasing such a powerful tool into the world.
What we’re listening to
Open Sourcing AI: Accelerating Progress or Opening Pandora's Box?
Unlike closed models dominated by Big Tech firms, open source AI is transparent and accessible to all. Initiatives like TensorFlow, PyTorch and Stable Diffusion show the power of open collaboration and innovation. However, there are debates around risks of open access versus limiting creativity and progress. Overall, open source promotes democratisation and appears pivotal to the ethical development of AI.
We're in the business of dialogue, not monologue. Don't be shy—hit us up at artificiallyintelligent00@gmail.com with your queries, thoughts, or even some good old-fashioned constructive criticism. And hey, if you think we're onto something, don't hesitate to share this with your circle.
Happy Monday,
Cam and Ed