The AI Thread

Midjourney works as a bot in Discord, so you need to install Discord and join the Midjourney server. Then you have to pay a monthly fee; as far as I know there is no free generation in Midjourney anymore. Generation is done by asking the bot in the chat to create an image using /imagine [prompt].
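For anyone who has not seen it, a generation request in the Discord chat looks something like this (the prompt text is just a made-up example; optional parameters like --ar for aspect ratio go at the end):

```
/imagine prompt: a volcano erupting at sunset, photorealistic --ar 16:9
```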

Even if closely followed by custom Stable Diffusion models, quality-wise Midjourney continues to be the best, at least for basic text-to-image work. For advanced users the only reasonable choice is Stable Diffusion through Automatic1111 or ComfyUI; that way you can control everything: all kinds of add-ons, models, workflows, inpainting, outpainting, even video generation in several ways, anything... All run locally, and therefore uncensored and free, the possibilities are infinite. The only issue is that you need a good graphics card and to learn a couple of things.

For image generation from text, for free and without complications, there is always the possibility of using DALL-E through Bing or Copilot. Unlimited generations free of charge, with very high quality but limited to 1024x1024 JPGs, and intensely censored. It will always generate 4 images to choose from.

A volcano according to Bing:
[Four attached images]
I didn't understand it all but here's how my brain interpreted it: "Midjourney is the best but using it is a pain in the arse therefore you should go for Bing using Dall-E engine".

For now it seems to work great thanks. :)
 
I didn't understand it all but here's how my brain interpreted it: "Midjourney is the best but using it is a pain in the arse therefore you should go for Bing using Dall-E engine".

For now it seems to work great thanks. :)
Midjourney has the highest quality and is easy to use, but it is limited in scope and expensive. Stable Diffusion is unlimited in scope and free to use, and quality-wise it depends on user knowledge, but it is complicated and you need a powerful PC. Bing is the easiest to use, it is free and it has good quality, but it is very limited.

So, yep, according to your previous post Bing seems the best option for you.
 
The discord method is confusing af. I've tried a couple times over the past year.

Discord itself is a huge pain. I get alerts someone @'d me and it won't take me to the post.
 
I use Discord occasionally and it works fine for me, as far as I can tell. It's been a long time since I used Midjourney though (it was still free back then). Now it is something like $20 a month minimum, $120 if you want to generate at acceptable speed and such. It isn't worth it, much less when we can use DALL·E 3 for free just by asking Bing.
 

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
 
I am surprised that sort of thing passed the legal sniff test. Over here in Europe I know for a fact that such lifelong binding clauses would be dismissed by any court.
 

If a departing employee declines to sign the document, or if they violate it, they can lose all vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” has confirmed publicly that he had to surrender what would have likely turned out to be a huge sum of money in order to quit without signing the document.
I do not quite understand this. If they have earned equity, how can they be required to give it up? If they can be required to give it up in order to leave the company, surely they have not earned it, and it is actually a golden handshake?

Also, WTF: Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges
 
what do you mean?
Effective altruism, as I understand it, is the billionaires' philosophy that says they can be evil now as long as they help the world develop so that more people live in the future, or something.
 
Nice piece in the NYT today suggesting that AI has been overhyped, with this especially nice line in it:

It feels like another sign that A.I. is not even close to living up to its hype. In my eyes, it's looking less like an all-powerful being and more like a bad intern whose work is so unreliable that it's often easier to do the task yourself.
 
Nice piece in the NYT today suggesting that AI has been overhyped, with this especially nice line in it:
I suspect they may have not tried to use it for anything it is good for. Here is a recent article about people who find value in it, even if the work needs human input.

Why mathematics is set to be revolutionized by AI

Giving birth to a conjecture — a proposition that is suspected to be true, but needs definitive proof — can feel to a mathematician like a moment of divine inspiration. Mathematical conjectures are not merely educated guesses. Formulating them requires a combination of genius, intuition and experience. Even a mathematician can struggle to explain their own discovery process. Yet, counter-intuitively, I think that this is the realm in which machine intelligence will initially be most transformative.

In 2017, researchers at the London Institute for Mathematical Sciences, of which I am director, began applying machine learning to mathematical data as a hobby. During the COVID-19 pandemic, they discovered that simple artificial intelligence (AI) classifiers can predict an elliptic curve’s rank — a measure of its complexity. Elliptic curves are fundamental to number theory, and understanding their underlying statistics is a crucial step towards solving one of the seven Millennium Problems, which are selected by the Clay Mathematics Institute in Providence, Rhode Island, and carry a prize of US$1 million each. Few expected AI to make a dent in this high-stakes arena.

AI has made inroads in other areas, too. A few years ago, a computer program called the Ramanujan Machine produced new formulae for fundamental constants, such as π and e. It did so by exhaustively searching through families of continued fractions — a fraction whose denominator is a number plus a fraction whose denominator is also a number plus a fraction and so on. Some of these conjectures have since been proved, whereas others remain open problems.

Another example pertains to knot theory, a branch of topology in which a hypothetical piece of string is tangled up before the ends are glued together. Researchers at Google DeepMind, based in London, trained a neural network on data for many different knots and discovered an unexpected relationship between their algebraic and geometric structures.

How has AI made a difference in areas of mathematics in which human creativity was thought to be essential?

First, there are no coincidences in maths. In real-world experiments, false negatives and false positives abound. But in maths, a single counterexample leaves a conjecture dead in the water. For example, the Pólya conjecture states that most integers below any given integer have an odd number of prime factors. But in 1960, it was found that the conjecture does not hold for the number 906,180,359. In one fell swoop, the conjecture was falsified.
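The single-counterexample point is easy to make concrete. A minimal sketch (mine, not from the article) that checks the Pólya conjecture's claim over a small range; the actual counterexample sits near 906 million, far beyond what this loop visits:

```python
# Check the conjecture described above: for every n up to the bound,
# at least half of the integers in [2, n] should have an odd number
# of prime factors, counted with multiplicity.

def omega(n):
    """Number of prime factors of n, counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

def polya_holds(bound):
    """Check the majority-odd property for every n from 2 to bound."""
    odd = 0
    for n in range(2, bound + 1):
        if omega(n) % 2 == 1:
            odd += 1
        if 2 * odd < n - 1:  # fewer than half of the n-1 integers in [2, n]
            return False
    return True

print(polya_holds(10_000))  # True: no counterexample this small
```

A naive check like this is exactly why a lone counterexample is so decisive: one failing n and the loop exits immediately.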

Second, mathematical data — on which AI can be trained — are cheap. Primes, knots and many other types of mathematical object are abundant. The On-Line Encyclopedia of Integer Sequences (OEIS) contains almost 375,000 sequences — from the familiar Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, ...) to the formidable Busy Beaver sequence (0, 1, 4, 6, 13, …), which grows faster than any computable function. Scientists are already using machine-learning tools to search the OEIS database to find unanticipated relationships.
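As a sketch (mine, not the article's) of how cheap and exact such data is, here is the Fibonacci sequence quoted above generated in a few lines:

```python
# Mathematical training data is cheap to generate: the first terms of
# the Fibonacci sequence (OEIS A000045) come out exactly, with no
# measurement noise -- unlike data from physical experiments.

def fibonacci(count):
    """Return the first `count` Fibonacci numbers, starting 1, 1."""
    terms = []
    a, b = 1, 1
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b
    return terms

print(fibonacci(7))  # [1, 1, 2, 3, 5, 8, 13], as quoted above
```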

AI can help us to spot patterns and form conjectures. But not all conjectures are created equal. They also need to advance our understanding of mathematics. In his 1940 essay A Mathematician’s Apology, G. H. Hardy explains that a good theorem “should be one which is a constituent in many mathematical constructs, which is used in the proof of theorems of many different kinds”. In other words, the best theorems increase the likelihood of discovering new theorems. Conjectures that help us to reach new mathematical frontiers are better than those that yield fewer insights. But distinguishing between them requires an intuition for how the field itself will evolve. This grasp of the broader context will remain out of AI’s reach for a long time — so the technology will struggle to spot important conjectures.

But despite the caveats, there are many upsides to wider adoption of AI tools in the maths community. AI can provide a decisive edge and open up new avenues for research.

Mainstream mathematics journals should also publish more conjectures. Some of the most significant problems in maths — such as Fermat’s Last Theorem, the Riemann hypothesis, Hilbert’s 23 problems and Ramanujan’s many identities — and countless less-famous conjectures have shaped the course of the field. Conjectures speed up research by pointing us in the right direction. Journal articles about conjectures, backed up by data or heuristic arguments, will accelerate discovery.

Last year, researchers at Google DeepMind predicted 2.2 million new crystal structures. But it remains to be seen how many of these potential new materials are stable, can be synthesized and have practical applications. For now, this is largely a task for human researchers, who have a grasp of the broad context of materials science.

Similarly, the imagination and intuition of mathematicians will be required to make sense of the output of AI tools. Thus, AI will act only as a catalyst of human ingenuity, rather than a substitute for it.
 
I suspect they may have not tried to use it for anything it is good for.
You may well be correct. I will go back and read it to see the extent to which it references the field of mathematics specifically.

My own interest is of course its capacity for verbal tasks.

So, first:
First, there are no coincidences in maths
That may make AI more relevant to that field than to others. But remember the article is about how these tools have been hyped, and that hype extends into non-mathematical fields as well.

Second:
AI has made inroads in other areas, too. A few years ago, a computer program called the Ramanujan Machine produced new formulae for fundamental constants, such as π and e. It did so by exhaustively searching through families of continued fractions — a fraction whose denominator is a number plus a fraction whose denominator is also a number plus a fraction and so on. Some of these conjectures have since been proved, whereas others remain open problems.
This doesn't sound so much to me like AI specifically as the brute computing power that computers have always had.

(but see "Sixth and lastly")

Third:
In 2017, researchers at the London Institute for Mathematical Sciences, of which I am director, began applying machine learning to mathematical data as a hobby. During the COVID-19 pandemic, they discovered that simple artificial intelligence (AI) classifiers can predict an elliptic curve’s rank — a measure of its complexity. Elliptic curves are fundamental to number theory, and understanding their underlying statistics is a crucial step towards solving one of the seven Millennium Problems, which are selected by the Clay Mathematics Institute in Providence, Rhode Island, and carry a prize of US$1 million each. Few expected AI to make a dent in this high-stakes arena.
It feels to me that when AI actually solves one of the Millennium Problems (rather than just making a dent in one), the hype may be warranted.

(but see "Sixth and lastly")

Fourth, this
For now, this is largely a task for human researchers, who have a grasp of the broad context of materials science.
seems to me to confirm the article's point: that AI is more like an intern than a project manager. Human interpretation of its results is still a critical, indispensable component of the process.

(but see "Sixth and lastly")

Fifth, this
Similarly, the imagination and intuition of mathematicians will be required to make sense of the output of AI tools. Thus, AI will act only as a catalyst of human ingenuity, rather than a substitute for it.
seems incorrectly phrased, given the preceding points. In no case did AI's output serve as a catalyst for human thinking. Human thinking on these problems was already under way, and AI was employed by humans as a tool for exploring the problem.

(But see "Sixth and lastly.")

Sixth and lastly, I don't know anything about math, so you may feel free to discount all of my points except for my first one.
 
This doesn't sound so much to me like AI specifically as the brute computing power that computers have always had.
I actually posted about this in 2021, but I still do not have my head around it. It really is not brute-forcing it:

We present two algorithms that proved useful in finding conjectures: a variant of the meet-in-the-middle algorithm and a gradient descent optimization algorithm tailored to the recurrent structure of continued fractions.​
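Whatever the search strategy, the inner step is evaluating candidate continued fractions and comparing them with known constants. A minimal sketch (mine, not the Ramanujan Machine's code), using the classic simple continued fraction of e, [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]:

```python
import math

# Evaluate a simple continued fraction [a0; a1, a2, ...] bottom-up and
# compare it with a target constant -- the basic verification step in
# a conjecture search over families of continued fractions.

def eval_continued_fraction(terms):
    """Evaluate [a0; a1, a2, ...] from the innermost term outward."""
    value = float(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1.0 / value
    return value

def e_terms(depth):
    """First `depth` terms of e's expansion: 2, then blocks 1, 2k, 1."""
    terms = [2]
    k = 1
    while len(terms) < depth:
        terms += [1, 2 * k, 1]
        k += 1
    return terms[:depth]

print(abs(eval_continued_fraction(e_terms(20)) - math.e) < 1e-12)  # converges fast
```

The real search inverts this: generate families of term patterns, evaluate them, and flag any whose value lands suspiciously close to a known constant.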
seems incorrectly phrased, given the preceding points. In no case did AI's output serve as a catalyst for human thinking. Human thinking on these problems was already under way, and AI was employed by humans as a tool for exploring the problem.
I think the point is that the conjecture serves as the catalyst. So the computer states a conjecture, and the humans have to prove it.

If you want more standalone useful maths, what about the matrix maths, where the computer found algorithms that beat the sum of all human effort in the field, in some instances?

Spoiler: Matrix maths algorithms
The red entries are computer-found algorithms better than anything humans have come up with.
 
If you want more standalone useful maths
See "sixth and lastly," above. It's not so much that I don't want it as that my mind wouldn't be able to do anything with it.

Regarding your point about "catalyst": anything can serve as a catalyst for the human mind, including a computer's results on some previous project it was set. But then you'd have to call the humans who set that project the AI's "catalyst."
 
See "sixth and lastly," above. It's not so much that I don't want it as that my mind wouldn't be able to do anything with it.
I do not really know anything about matrix maths, but what that red 47 on the third row means, for example, is that if you were multiplying two 4 x 4 matrices together before this paper, you would have referred to a 1969 algorithm and done 49 multiplications. Now you can use a new algorithm and do only 47. That may be something vitally important, such as giving you a few more fps in a computer game.

That alone seems like a pretty important step taken by machine that advances knowledge.
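For anyone curious where those numbers come from, a minimal sketch (mine, not from the paper): the schoolbook method uses n^3 scalar multiplications, while Strassen's 1969 trick multiplies 2 x 2 blocks with 7 products instead of 8, and applying it recursively to a 4 x 4 matrix gives 49. DeepMind's discovered algorithm gets the 4 x 4 case down to 47, in modular arithmetic:

```python
# Scalar-multiplication counts for n x n matrix multiplication
# (n a power of 2): naive n**3 versus fully recursive Strassen,
# which replaces 8 block products with 7 at every level.

def naive_mults(n):
    """Schoolbook algorithm: n**3 scalar multiplications."""
    return n ** 3

def strassen_mults(n):
    """Fully recursive Strassen: 7 products per halving of n."""
    if n == 1:
        return 1
    return 7 * strassen_mults(n // 2)

print(naive_mults(4), strassen_mults(4))  # 64 vs 49; AlphaTensor found 47
```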
Regarding your point about "catalyst": anything can serve as a catalyst for the human mind, including a computer's results on some previous project it was set. But then you'd have to call the humans who set that project the AI's "catalyst."
I would agree.
 
The other recent news also demonstrates utility, though I dislike that they have kept this version closed source because of terrorism or something. It sounds like Google DeepMind's AlphaFold 3 is changing things again by predicting how proteins can interact with other molecules. Modelling how proteins interact with one another and with RNA/DNA will enable so much basic biology, and modelling how they interact with drugs could bring in a golden age of drug discovery.
 
Nice piece in the NYT today suggesting that AI has been overhyped, with this especially nice line in it:

This sort of line is usually revealing of the layman's tendency to ask AI all the wrong questions, get dull non-answers, and then declare AI raw, obsolete, non-intelligent or whatever. A chatbot is an aggregator and amplifier of stuff, including human stupidity. Usually, if the human gets terrible answers, the first thing I suggest is changing the human in front of the machine.

It feels like another sign that A.I. is not even close to living up to its hype. In my eyes, it’s looking less like an all-powerful being and more like a bad intern whose work is so unreliable that it’s often easier to do the task yourself.

I have a counter-anecdote: a colleague of mine was tasked with translating 30 pages, and he tried to get ChatGPT to translate the whole lot. The AI did a seemingly odd thing: it translated the first page of the document and then crashed. So the colleague quickly dismissed AI as too raw to perform complex tasks and did the 30-page translation by hand (oof). In fact, he missed one important bit of knowledge (rtfm!) necessary to integrate AI into his workflow. Because hardware is in short supply, OpenAI adopted a token system, where each message the operator sends is limited to a certain number of tokens (8192), in order to conserve operations per second and spread the available hardware fairly between clients. One token is about 4 characters (roughly 3/4 of a word).

A smart operator who had read a short manual would accompany the 30-page document sent to the AI with a two-line instruction explaining that each output should stop after a few thousand tokens, after which the AI should wait for the operator's one-word prompt to continue the translation.

But who wants to read manuals these days? Everyone wants AI to magically solve problems. It's not that simple: AI needs an intelligent human as a component for successful output, and the more complex the task, the more intelligent the human prompting needs to be.
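The workflow described above can be sketched in a few lines of Python (names and the paragraph-splitting details are mine; the one-token-per-four-characters rule is just the rough estimate from the post): split the document into pieces that fit comfortably under the per-message token budget, then feed them to the chatbot one prompt at a time.

```python
# Split a long document into chunks small enough to translate one
# prompt at a time, estimating tokens with the rough rule that one
# token is about four characters.

TOKEN_LIMIT = 8192           # per-message limit mentioned above
CHARS_PER_TOKEN = 4          # rough rule of thumb
BUDGET = (TOKEN_LIMIT // 2) * CHARS_PER_TOKEN  # leave room for the reply

def chunk_document(text, budget=BUDGET):
    """Split text into chunks of at most `budget` characters,
    breaking on paragraph boundaries where possible."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = current + "\n\n" + para if current else para
        if len(candidate) <= budget:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = para
        # a single paragraph longer than the budget is split hard
        while len(current) > budget:
            chunks.append(current[:budget])
            current = current[budget:]
    if current:
        chunks.append(current)
    return chunks
```

Each chunk then goes out as its own message, with a standing instruction to translate it and wait for the next one.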
 

Google AI search says to glue pizza and eat rocks

Google's new artificial intelligence (AI) search feature is facing criticism for providing erratic, inaccurate answers.
Its experimental "AI Overviews" tool has told some users searching for how to make cheese stick to pizza better that they could use "non-toxic glue".
The search engine's AI-generated responses have also said geologists recommend humans eat one rock per day.
A Google spokesperson told the BBC they were "isolated examples".
Some of the answers appeared to be based on Reddit comments or articles written by the satirical site The Onion.
They have been widely mocked on social media.
But Google insisted the feature was generally working well.
"The examples we've seen are generally very uncommon queries, and aren’t representative of most people’s experiences," it said in a statement.
"The vast majority of AI overviews provide high quality information, with links to dig deeper on the web."
It said it had taken action where "policy violations" were identified and was using them to refine its systems.
It is not the first time the company has run into problems with its AI-powered products.
In February, it was forced to pause its chatbot Gemini which was criticised for its "woke" responses.
Gemini's forerunner, Bard, also got off to a disastrous start.

Google began trialling AI overviews in search results for a small number of logged-in UK users in April, but launched the feature to all US users at its annual developer showcase in mid-May.
It works by using AI to provide a summary of search results, so users do not have to scroll through a long list of websites to find the information they are seeking.
It is billed as a product that "can take the legwork out of searching" though users are warned it is experimental.
However, it is likely to be widely used - and trusted - because Google search remains the go-to search engine for many.
According to web traffic tracker Statcounter, Google's search engine accounts for more than 90% of the global market.
Google is far from the only tech firm facing a backlash over its attempts to cram more AI tools into consumer-facing products.
The UK's data watchdog is looking into Microsoft after it announced a feature coming to its new range of AI-focused PCs that would take continuous screenshots of their online activity.
And ChatGPT-maker OpenAI was called out by Hollywood actress Scarlett Johansson for using a voice likened to her own, saying she turned down its request to voice the popular chatbot.
https://www.bbc.com/news/articles/cd11gzejgz4o
 
Yeah, part of the hallucination problem is that more and more data is sucked into this GPT black hole, but utility isn't linear. We appear to have the law of diminishing returns kicking in: at a certain point no amount of raw data and no amount of accelerators/GPUs will achieve a meaningful increase in the quality of outputs. Some say the answer is in "AI agents". Look it up, it's a whole separate rabbit hole, roughly meaning AI specialisation. The core of the problem remains the low quality of input data. So that needs to be figured out first in order to make the next qualitative leap. Instead of throwing GPUs at the problem, carefully tuning crystal clean informational torrents is what's required.
 
carefully tuning crystal clean informational torrents is what's required.
But wasn't the whole starting design premise, "oh, they can just go scour the web; the web's got everything"?

And isn't that broad spectrum of information crucial to the magic of their being able to respond to any prompt you put to them?

GIGO. And, from one point of view, the web is garbage.
 