Lensa AI: Another Step Forward in Human Subjugation By Our Robot Overlords

If you’ve spent any time on the internet in the past week, you’ll have seen a bizarre explosion in people using the AI image generator Lensa to make fantasy portraits of themselves.

The app, owned by California-based tech company Prisma Labs, lets users upload photos to the platform, which then generates “magic selfies”, using AI technology to render the user as a digital painting.

Of course, giving over images of your face to a fairly anonymous company for a quick thrill is always going to come with some risks. So too does letting artificial intelligence create idealised versions of people based on what the internet has taught it.

Here’s what you need to know about Lensa AI and the issues with it as a growing trend.

What Is the Lensa AI Trend?

Lensa AI is a photo editing app available on both the App Store and Google Play. Although it’s been around since 2018, it’s taken off in the past week, with Prisma Labs reporting that the app now has millions of users worldwide. It is currently the most popular app for both Android and Apple users.

The app offers photo filters that promise to “take your photos to the next level,” according to its website. Users can blur or change their background, correct ‘imperfections’, add borders, and more using various adjustment settings.

However, the recent explosion of interest is down to Lensa’s new incorporation of something called Stable Diffusion, a ‘deep learning’ AI image generation model trained on billions of images scraped from the internet. This gives Lensa users the ability not just to add filters, but to create something entirely new. Upload 10 to 20 images of yourself and the app spits out bizarre, sometimes hilariously bad AI pictures that people seem to love sharing on social media.
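Lensa’s exact avatar pipeline isn’t public (it reportedly fine-tunes the model on each user’s uploads), but as a rough sketch of what Stable Diffusion itself does, here is how a single image can be generated from a text prompt using the open-source model via Hugging Face’s diffusers library. The model name, prompt, and settings below are common defaults chosen for illustration, not anything Lensa has confirmed it uses.

```python
# A minimal sketch of text-to-image generation with open-source Stable
# Diffusion via Hugging Face's diffusers library. This illustrates the
# underlying model only; Lensa's avatar pipeline (which reportedly
# fine-tunes on a user's photos) is not public.
import torch
from diffusers import StableDiffusionPipeline

# Load pretrained weights (several GB are downloaded on first run)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# Generate one image purely from a text description
prompt = "a fantasy digital painting portrait of a person, highly detailed"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("portrait.png")
```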

Interestingly, the app is not free to use: 50 images generated by Lensa will cost you $3.99, making this the first time many people will have spent money on AI-created art.

Is Lensa Safe?

Obviously, in a world of data breaches and privacy scandals, people are right to wonder exactly what Lensa is doing with the photos users upload to it.

According to Prisma’s privacy policy, the company collects usage details, IP addresses, and information delivered by cookies and other tracking technology, in addition to the pictures users give it.

“You grant us a perpetual, revocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable, sub-licensable license to use, reproduce, modify, adapt, translate, create derivative works from and transfer your User Content, without any additional compensation to you and always subject to your additional explicit consent for such use where required by applicable law and as stated in our Privacy Policy,” Prisma’s terms of use state.

Prisma also states in its privacy policy that images are only retained by the company for 24 hours and that user data is ‘pseudonymised’, meaning it can’t be used to identify you.
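Prisma doesn’t describe its method, but pseudonymisation generally means replacing direct identifiers with artificial tokens that can only be tied back to a person using a separately held key. A loose illustration of the idea in Python (everything here is hypothetical, not Prisma’s actual code):

```python
# A loose illustration of pseudonymisation: a direct identifier is replaced
# by a salted, one-way token, so stored records don't contain the identifier
# itself. Purely illustrative; this is not Prisma's implementation.
import hashlib
import os

SECRET_SALT = os.urandom(16)  # the 're-identification key', stored separately

def pseudonymise(user_id: str) -> str:
    """Turn a real identifier into an artificial token."""
    return hashlib.sha256(SECRET_SALT + user_id.encode()).hexdigest()

# The raw email address never appears in the stored record
record = {"user": pseudonymise("alice@example.com"), "images_uploaded": 12}
print(record)
```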

Users can request that their personal data be wiped from Lensa by emailing the company at [email protected].

What’s the Problem With Lensa?

Well, for starters, users giving over their faces to Lensa will necessarily be training the AI software on how to detect, replicate, and distinguish human faces. This might sound innocuous on the surface, but the implications are concerning.

Essentially, you’re improving AI facial recognition technology, something that can be, and already is, used in law enforcement and mass surveillance. To give just one example, China is currently using the technology to identify and track the minority Uyghur Muslim population in Xinjiang province. An independent tribunal has ruled that what is being done to this group, in the name of security and counterterrorism, amounts to genocide.

There are also consent and privacy issues: data sets of human faces are often gathered without the knowledge of the people in those images, and there have been numerous lawsuits and criminal investigations over the practice.

In terms of the Lensa AI app itself, there are already disturbing reports that the images it creates are hyper-sexualised, particularly those of women, and carry problematic racial implications.

Multiple users have complained that the software simply doesn’t know what to do with Asian or Black faces and tends to skew them either towards more Caucasian characteristics or Japanese anime style.

Similarly, images of women, even when users upload photos of just their face, come back as full-body shots that are wildly out of proportion. Many have said the app seems intent on making them more petite or giving them larger breasts.

There are even examples of female users being fed back AI nudes of themselves, which Lensa’s creators say can only happen if those users upload naked images of themselves, in breach of the app’s terms of use.

In many ways, this and the above issues are not technically the fault of the app’s creators, but of the image data the AI was trained on. Prisma Labs’ CEO and co-founder Andrey Usoltsev told TechCrunch that the technology is basically recreating human biases.

“Stable Diffusion neural network is running behind the avatar generation process,” Usoltsev said.

“Stability AI, the creators of the model, trained it on a sizable set of unfiltered data from across the internet. Neither us, nor Stability AI, could consciously apply any representation biases; To be more precise, the man-made unfiltered data sourced online introduced the model to the existing biases of humankind. The creators acknowledge the possibility of societal biases. So do we.”

This is a well-documented problem with AI facial recognition technology, one that has long been flagged as a serious ethical issue. One example is its use in law enforcement, where the US Federal Government admitted that its own facial recognition system was racially biased: identification mistakes were made in 35% of cases when trying to identify a woman of colour, but in only 1% of cases when trying to identify a white male.

The data sets used by Stability AI and Prisma have also come under fire for ‘stealing’ the work of artists online. Users have reported seeing the “mangled remains” of an artist’s signature at the bottom of Lensa AI images, a telling sign that the AI has been trained on the work of real-life artists.

Those artists are, of course, not compensated for the work they produced, which has allowed a company to plagiarise their talent and reproduce it at a much lower cost. Digital artists online are understandably worried about the financial impact that services like Lensa (and, to be clear, there are dozens of them out there) will have on their work in the future.

So, to sum up, handing your data over to a private company for its own gain throws up a bunch of red flags. Sure, the images are cool and it’s fun to see your likeness reproduced in a digital medium, but don’t pretend this isn’t deeply problematic for artists and society as a whole.

Related: Artificial Intelligence Can Now Detect COVID Conspiracy Theories Online

Related: All Online Content Could Soon Be Regulated Using 25 Year Old Classifications
