Apple has finally entered the AI race by building ChatGPT into Siri, and New York bans ‘addictive’ social media algorithms for kids

  • Published June 11, 2024

Hello and welcome to Screenshot, your weekly tech update from national technology reporter Ange Lavoipierre, featuring the best, worst and strangest in tech and online news. Read to the end for a conspiracy theory about a bottomless ocean.

Ding dong, Apple has arrived at the AI party. So what did it bring?

Apple is finally making its big AI move, and it matters more than most flashy big tech launches for two reasons.

The first is Apple's sheer market share. The second is that the company has been under pressure to catch up to its peers Samsung, Microsoft and Google, which have all been faster at integrating artificial intelligence into everything that moves, whether or not you think that's a good idea.

Overnight, chief executive Tim Cook announced that AI would be integrated into a range of Apple products, including Siri.

The company has even gone so far as to make a shameless bid for the acronym itself, branding the new features “Apple Intelligence” — which is a noteworthy detail when you consider Apple used to shy away from using the term “AI” at all.

The centrepiece is arguably Apple’s deal with OpenAI, building ChatGPT into Siri for those who opt in — promising a more sophisticated and personalised user experience.

[Image: Three senior Apple staffers sitting in grey armchairs on stage at a company event. Apple is branding its new AI features as "Apple Intelligence". (AP: Jeff Chiu)]

As for what exactly is changing beyond a revamped Siri, Apple said its AI features will “carry out actions between apps, as well as manage notifications, automatically write things for you, and summarise text in mail and other apps”.

At the same time, Apple is anxious to point to privacy protections it’s putting in place.

While some of the new features will work locally on the device itself, more complicated ones will send a user’s request to the cloud — meaning an external server.

Even so, the company says those servers will be well protected, and users’ data won’t be kept.

The features will apparently be available only on more recent models of its phones, tablets and laptops, and only when their language is set to English.

You can expect to see the changes rolling out in “beta” (to some if not all users) via software updates in the coming months.

Arguably the most ridiculed gadget in recent memory just hit a new low

At the other end of town, the Humane AI Pin, a wearable AI personal assistant, has warned its customers the battery case might catch fire.

We’re talking about this not because you have one — only 10,000 people in the world do — but because Humane has been a mascot of the AI investment bubble.

The company emerged in 2021 with $US100 million in funding, and raised the same amount again in 2023, before launching to disastrous reviews and failing to hit even 10 per cent of its sales target.

The main criticism seems to be that it doesn’t do anything a smartphone can’t already do better, apart from projecting lasers onto your hand (granted, you can use finger movements to feed commands back to the device — it’s not not cool).

Now, everyone who bought one has received an email warning that the battery charging case is a fire risk.

There’s apparently no safety issue with the pin itself, but then again, that was never the problem.

Doctors and pharmacists left in the dark 

That, if you’re wondering, is the sound of silence from MediSecure.

As we reported on the weekend, doctors and pharmacists named in a sample of stolen MediSecure data up for sale on the dark web are still waiting to hear from the company.

Three weeks ago it emerged that MediSecure, an Australian electronic prescription provider, had been hacked, and provider details have since been offered for sale on the dark web.

Last week, the company went into administration after the federal government denied its request for financial support to handle the fallout — a fact which may or may not be linked to the aforementioned silence.

New York moves to limit kids’ access to ‘addictive feeds’

New York has more or less just banned the algorithm as we know it for under-18s.

The new state law defines an addictive feed as one where the content is recommended based on information about the user — in other words, the definition of the news feeds used by most social media platforms.

Feeds with content listed in chronological order would still be allowed.

It’s a fundamentally different conversation to the one we’re having here in Australia, where the debate has mostly been focused on limiting access to specific categories of content, such as misinformation, or non-consensual deep fakes, or violent videos.

This, on the other hand, is about the systems that promote and reward that content, and push it to the top in the first place.

And if it’s all too much …

Then contemplate the infinite on r/wheresthebottom, the subreddit where there’s no such thing as the bottom of the ocean.

Don’t let the name fool you: it’s SFW, and for those with an allergy to actual conspiracy theories, rest assured it’s not po-faced.

Think of it as a distant cousin of the ironic Birds Aren’t Real movement, but minus the political undertones.

