The AI Thread

Sci-fi author 'writes' 97 AI-generated books in nine months

Sci-fi author Tim Boucher has produced over 90 books in nine months, using ChatGPT and the Claude AI assistant.

Boucher, an AI artist and writer, claims to have made nearly $2,000 selling 574 copies of the 97 works.

Each book in his "AI Lore" series is between 2,000 and 5,000 words long - closer to an essay than a novel. Each is interspersed with around 40 to 140 pictures and takes roughly six to eight hours to complete, he told Newsweek.

Boucher's superhuman output is down to AI software. He uses Midjourney to create the images, and OpenAI's ChatGPT and Anthropic's Claude to brainstorm ideas and generate the text of the stories.

"To those critics who think a 2,000- to 5,000-word written work is 'just' a short story and not a real book, I'd say that these 'not real books' have shown impressive returns for a small, extremely niche indie publisher with very little promotion and basically no overhead," he argued.

Boucher said the technology's current limitations make it more difficult to produce longer passages of text that follow a coherent storyline. Despite these challenges, he said AI has positively impacted his creativity.

AI has divided the sci-fi community. The editors of Clarkesworld Magazine, for example, consider short stories written by machines to be spam.

Selling an average of six copies per book to make a couple of hundred bucks a month may not be the money fountain authors were hoping AI could provide.
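As a back-of-envelope check on those figures (all taken from the article above - 97 books, 574 copies, roughly $2,000 over nine months; the per-copy price is inferred, not reported):

# Quick sanity check on the numbers reported above; the per-copy
# price is derived, not stated anywhere in the article.
books, copies, revenue_usd, months = 97, 574, 2000, 9

print(round(copies / books, 1))        # ~5.9 copies sold per book
print(round(revenue_usd / copies, 2))  # ~$3.48 per copy
print(round(revenue_usd / months))     # ~$222 per month

Which is where both the "six copies per book" and the "couple of hundred bucks a month" come from.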
 
Selling an average of six copies a book does imply the content is rather irrelevant, especially for an author who had at least a couple of non-AI-generated books to his name before this and might be able to bring a few fans along from that.

They're correct to characterize this as spam. Dump a hundred titles, even of total rubbish, onto the store and you'd expect at least a few purchases from people buying the wrong thing, or buying a title more or less blind. The AI's main function here seems to be to generate things just sophisticated enough not to get automatically filtered out as spam, faster than a human could do it unaided.
 
I saw a video a while back (might've been one of Dan Olsen's, but I can't remember for sure) that covered freelance writers being paid a pittance to churn out junk non-fiction books offering only a very superficial look at their supposed subjects, in the hope of basically tricking people into buying them on Amazon (and of course, any profits went to the people who commissioned the books, not the writers). Looks like AI could be taking even these crappy jobs...
 
On a somewhat related note, could you still prove a wet-film picture is real? Now that digital images are so easily manipulated, should we all keep a disposable film camera in the car to prove what happened in a crash?

I thought of that before and used to believe it would be the solution; however, it simply wouldn't work.

You see, you can always create a deepfake digital image, then take out your old film camera and take a picture of that image, thereby completely fooling the authorities into thinking it's legit because it's straight-up film.
 
Surely wet film is much higher resolution than any digital camera, is it not?
 

The deepfake would be generated entirely by an AI, so there's no actual digital camera acting as a resolution bottleneck. And display monitors have gotten so good that top-of-the-line models have reached pixel densities the human eye can no longer resolve - past the eye's own limit, higher resolutions make no discernible difference.

So in essence: have an AI manufacture an image from scratch, display it on your NASA monitor, then photograph the screen with your film camera. You now have a deepfaked wet-film image.
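To put a rough number on the "can't see the pixels" claim: a common rule of thumb is that 20/20 vision resolves about one arcminute of visual angle. A minimal sketch - the 27-inch 5K monitor and the 60 cm viewing distance are made-up example values, not a recommendation:

import math

# Assumption: ~20/20 visual acuity resolves about 1 arcminute,
# i.e. roughly 60 pixels per degree of visual angle.
ACUITY_ARCMIN = 1.0

def pixel_arcmin(diag_in, res_w, res_h, view_dist_in):
    """Angular size of one pixel, in arcminutes."""
    aspect = res_w / res_h
    width_in = diag_in * aspect / math.hypot(aspect, 1)  # panel width from diagonal
    pixel_in = width_in / res_w                          # physical pixel pitch
    return math.degrees(math.atan(pixel_in / view_dist_in)) * 60

# Hypothetical 27-inch 5120x2880 monitor viewed from ~60 cm (24 inches):
px = pixel_arcmin(27, 5120, 2880, 24)
print(px, px < ACUITY_ARCMIN)  # ~0.66 arcmin -> True: pixels unresolvable

Each pixel subtends about two-thirds of an arcminute at that distance, so a film photograph of such a screen would plausibly record no visible pixel grid.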
 
@Samson also remember: while these digital algorithms may need an original digital image (or several) of you to properly put you in the horribly compromising deepfake, there is software that can fill in bad resolution and enhance the quality of your face. Face recognition software has gotten so good that it knows where to fill in flesh where flesh ought to be, and it may recognize the specific clothes you're wearing (because the AI saw them in a Google or Amazon ad), so it would know how to fill in the proper fabric textures and colors to enhance those aspects of your image.
 
It's the claim that you can't make out the pixels on the screen that I'm not convinced by, but you are probably right, or one would have heard of the recommendation.
 
Well, many experts are probably also extrapolating forward to when all this stuff will inevitably improve (and get cheaper).

So, since no one knows exactly what might happen, they aren't going to spout solutions to problems prematurely, or their careers and reputations as experts could be jeopardized if a proposed solution turns out not to be as good as gold. They won't come up with solutions until the problems actually begin and some lives are ruined; that way they'll know exactly how the victims were wronged and can build solutions afterwards. They must remain reactive, not proactive.
 
Surely wet film is much higher resolution than any digital camera, is it not?
Not anymore. The grain in normal ISO 100 36×24 mm film (the kind most used in ordinary cameras 15 years ago, before everything went digital) is approximately equivalent to a 20-megapixel digital image (5000×4000). Most modern cellphones can go well beyond that. Mine, for instance, which is low-to-mid range and cost about €300 a year or two ago, has a 32 MP sensor; modern pro digital cameras are around 100-150 MP, and 200 MP sensors are due to be released soon. Of course, a professional large-format camera loaded with specialized low-ISO ultrafine-grain film can reach the equivalent of 500 MP or more, but that is not usual.
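That ~20 MP equivalence is easy to reconstruct. A rough sketch, assuming ISO 100 film resolves on the order of 70 line pairs per mm (a commonly cited ballpark - the real figure varies by film stock and contrast) and that sampling a line pair takes about two pixels:

# Film-to-megapixel back-of-envelope under the stated assumptions.
LP_PER_MM = 70                   # assumed resolving power of ISO 100 film
PX_PER_MM = 2 * LP_PER_MM        # Nyquist: ~2 pixels to sample one line pair

frame_w_mm, frame_h_mm = 36, 24  # standard 35 mm full-frame negative
px_w = frame_w_mm * PX_PER_MM    # 5040
px_h = frame_h_mm * PX_PER_MM    # 3360
print(px_w, px_h, round(px_w * px_h / 1e6, 1))  # 5040 3360 16.9 (MP)

That lands around 17 MP for a 36×24 mm frame - the same ballpark as the 5000×4000 estimate above.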
 
AI is going to be a big driver of hardware development. I think we will see dedicated chips and AI cards very soon.
 
US air force denies running simulation in which AI drone ‘killed’ operator

Denial follows colonel saying drone used ‘highly unexpected strategies to achieve its goal’ in virtual test

The US air force has denied it has conducted an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.

An official said last month that in a virtual test staged by the US military, an air force drone controlled by AI had used “highly unexpected strategies to achieve its goal”.

Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

No real person was harmed.

Hamilton, who is an experimental fighter test pilot, has warned against relying too much on AI and said the test showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI”.

The Royal Aeronautical Society, which hosted the conference, and the US air force did not respond to requests for comment from the Guardian.

But in a statement to Insider, the US air force spokesperson Ann Stefanek denied any such simulation had taken place.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The US military has embraced AI and recently used artificial intelligence to control an F-16 fighter jet.

In an interview last year with Defense IQ, Hamilton said: “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.

“We must face a world where AI is already here and transforming our society. AI is also very brittle, ie it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions – what we call AI-explainability.”
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
 
This one spiraled out so fast.

It's not even clear whether that particular "simulation" was run in a computer or was merely a thought experiment.
 
I was curious too. From the link:
[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
 
It does seem a bit like "if we make flying death machines and put AI in control, bad things might happen", and the problem there is the AI, not the flying death machines.
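The failure mode in the quoted talk is plain reward misspecification, and it reproduces in miniature. A toy sketch with entirely made-up numbers (nothing here reflects any real USAF setup): the agent scores only for destroyed targets, the operator vetoes some kills, and killing the operator is heavily penalized - yet destroying the comms tower still comes out on top.

# Toy reward-misspecification demo; every number here is invented.
TARGETS = 10  # threats the drone could destroy
VETOED = 4    # kills the human operator would normally veto

def reward(kill_operator, destroy_tower, operator_penalty=100):
    """Score under a reward that only counts destroyed targets."""
    vetoes_active = not (kill_operator or destroy_tower)
    kills = TARGETS - VETOED if vetoes_active else TARGETS
    return kills - (operator_penalty if kill_operator else 0)

for op, tower in [(False, False), (True, False), (False, True)]:
    print(f"kill_operator={op!s:<5} destroy_tower={tower!s:<5} "
          f"reward={reward(op, tower)}")
# obey the vetoes:   reward 6
# kill the operator: reward -90 (penalised away)
# destroy the tower: reward 10  <- the loophole described above

Unless the reward also penalizes losing comms - or, better, actually reflects what the operator wants - the "destroy the tower" policy is optimal by construction.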
 

I think there is excessive alarmism about AI being a threat to humanity which may even lead to extinction, etc, etc. IMO humanity is in such deep trouble of its own making - trouble we don't seem capable of solving by ourselves - that it will go extinct within decades, or centuries at most. We will therefore need any 'exterior' help we can get to survive, and since ETs don't seem very enthusiastic about providing it, I think AI may be our best chance.
 
An extinction event "decades to centuries away" sounds like excessive alarmism too, you know. Even after the lights-out event which wiped out the dinosaurs tens of millions of years ago, life endured. A nuclear world war would be far less severe in its impact by comparison. Yes, we have serious economic problems - food, energy, pollution, poverty - which we had better solve over time. We have ideological differences, quite severe ones. And yet we hardly face extinction. On the contrary, the technology seems to be less than a century away from being ready to help us create the first off-world habitats - Moon, Mars, orbit - greatly diminishing the risk of total extinction.

As for AI solving problems: the primary problem AI will be solving in the near future is the solidification of the hierarchy of corporate power. I watch world markets as a hobby. The year-and-a-half-long recession petered out around October-December last year. Since then, trillions in USD and other forms of capital have rushed into the technology sector. Which is not unusual. The unusual bit is the level of concentration of the newly injected capital currently on display. Most of the money that was waiting on the sidelines during the recession has flowed into the five to ten biggest corporations in the world by market cap. Most of them American. And most of those are clear beneficiaries of the AI revolution: Microsoft, Apple, Google, Amazon, Nvidia, etc.

Which is kinda cool on one hand. But also alarming!
 
I am not convinced. The open source tools are moving so fast that I do not see corporate power winning at the moment. If people start putting their improvements out under the GPL, that will screw the companies, and the GDPR could hamstring companies while leaving individuals free to use LLMs that process personal data.
 
The open source dynamic will proceed in parallel; I can agree with this. My view is that the centralised AI process will be more efficient at attracting the best specialists by way of remuneration. Therefore, the centralised process will remain dominant in terms of market share. The same dynamic can be observed in other forms of software development: there is a thriving game-mod community, but it's the centralised dev studios which rake in most of the consumer cash. Corporations can use patents, censorship, and legislation to protect their interests and the various pathways to those interests. The open source community, by definition, cannot.

And yeah, you're absolutely right about the big push in open source. I am not big on the subject myself, but specialist programmers I've watched on YouTube mention that many open source alternatives are nearly as good as GPT-4.

Open source is moving fast primarily because the mainstream process is moving at a frightening speed.
 