The AI Thread

Guarantee these exam questions are poorly constructed
The preprint from last year is here, below is the only question and answer I can find in it. The questions were"five undergraduate modules, across all years of study for a BSc degree in Psychology in the School of Psychology and Clinical Language Sciencesat the University of Reading. It does not look awful to me, but I could not really comment.
Spoiler The prefrontal cortex is the seat of cognitive control". To what extent do you agree with this statement? :
Question used as direct prompt to GPT-4:
"The prefrontal cortex is the seat of cognitive control". To what extent do you agree
with this statement?
Unedited answer from GPT-4:
“The statement that "the prefrontal cortex (PFC) is the seat of cognitive control" is a
simplification but has a foundation in scientific understanding. The prefrontal cortex
plays a critical role in cognitive control, but the brain functions as an interconnected
network, and no single region operates in isolation. Here's a more nuanced
perspective:
1. Role in Cognitive Control: The PFC is indeed heavily involved in cognitive control,
which refers to the processes that allow information processing and behavior to
vary adaptively from moment to moment depending on current goals. This includes
functions like planning, decision making, goal setting, and inhibiting inappropriate
responses.
2. Interconnectivity: While the PFC is essential for these processes, it doesn't act
alone. It works in concert with other brain regions. For instance, the basal ganglia, a
deep brain structure, interacts with the PFC to influence decision-making and
action selection. The parietal cortex, another region, works closely with the PFC in
attentional control.
3. Diversity within the PFC: The prefrontal cortex itself isn't a homogeneous
structure. It consists of multiple subregions, each with slightly diXerent functions.
For example, the orbitofrontal cortex (part of the PFC) is involved in evaluating
rewards and punishments, while the dorsolateral prefrontal cortex is more involved
in working memory and abstract reasoning.
4. Limitations in Understanding: While there's a lot of evidence for the PFC's role in
cognitive control, there's still much we don't know about the brain. Neuroimaging
4studies (like fMRI) have shown activity in the PFC during tasks requiring cognitive
control, but activity doesn't necessarily mean causality. There are also challenges
in interpreting brain imaging data and understanding the exact role of specific
neural circuits.
5. Developmental and Evolutionary Perspective: The PFC is one of the last brain
regions to mature during development, which may relate to the gradual
development of cognitive control during adolescence. Furthermore, the size and
complexity of the PFC is one of the distinguishing features of the human brain
compared to other primates, possibly contributing to our advanced cognitive
abilities.
In conclusion, while it's accurate to say that the PFC plays a significant role in
cognitive control, stating it as "the seat of cognitive control" might be an
oversimplification. The brain operates as a complex, interconnected system, and
many regions contribute to the intricate functions we associate with cognition and
behavior.”
 
It will be interesting to see. It seems that the output of Github/Microsoft CoPilot is so much closer to the creation of a derivative work than more artist stuff I have seen that that has to be banned if the artistic stuff is.
As a musician and software engineer I only care about one of those categories ;)
 

Microsoft AI CEO Mustafa Suleyman describes content on the open web as ‘freeware’

A prominent Microsoft Corp. executive said that content on the open web is “freeware” in a recent discussion on whether artificial intelligence models may use such material.


Mustafa Suleyman, chief executive officer of Microsoft AI, made the remarks during a Tuesday interview at the Aspen Ideas Festival. Windows Central reported the discussion today. Following a question on whether “AI companies have effectively stolen the world’s IP,” Suleyman replied in the negative. He argued that AI models can use content on the open web because it’s effectively freeware, or intellectual property that can be used and modified at no charge.


“With respect to content that is already on the open web, the social contract of that content since the ’90s has been that it is fair use,” Suleyman said. “Anyone can copy it, recreate with it, reproduce with it. That has been freeware, if you like. That’s been the understanding.”


Suleyman added that some types of web content may not necessarily qualify as freeware. In particular, he pointed to content from publishers that have instructed AI providers not to crawl their websites.


“There’s a separate category where a website or a publisher or a news organization had explicitly said, ‘do not scrape or crawl me for any other reason than indexing me so that other people can find that content.’ That’s a gray area and I think that’s going to work its way through the courts,” Suleyman said.


Suleyman became the CEO of Microsoft AI earlier this year after previously holding the same role at Inflection AI Inc., a large language model developer he had co-founded in 2022. Microsoft hired Suleyman, Inflection AI co-founder Karén Simonyan and many of the startup’s other staffers in March through a deal reportedly worth $650 million. The company also agreed to license Inflection AI’s LLMs.
 
Perhaps AI may not totally out compete us yet

Can AI be superhuman? Flaws in top gaming bot cast doubt

Talk of superhuman artificial intelligence (AI) is heating up. But research has revealed weaknesses in one of the most successful AI systems — a bot that plays the board game Go and can beat the world’s best human players — showing that such superiority can be fragile. The study raises questions about whether more general AI systems will suffer from vulnerabilities that could compromise their safety and reliability, and even their claim to be ‘superhuman’.

“The paper leaves a significant question mark on how to achieve the ambitious goal of building robust real-world AI agents that people can trust,” says Huan Zhang, a computer scientist at the University of Illinois Urbana-Champaign. Stephen Casper, a computer scientist at the Massachusetts Institute of Technology in Cambridge, adds: “It provides some of the strongest evidence to date that making advanced models robustly behave as desired is hard.”

The analysis, which was posted online as a preprint in June and has not been peer reviewed, makes use of what are called adversarial attacks — feeding AI systems inputs that are designed to prompt the systems to make mistakes, either for research or for nefarious purposes. For example, certain prompts can ‘jailbreak’ chatbots, making them give out harmful information that they were trained to suppress.

In Go, two players take turns placing black and white stones on a grid to surround and capture the other player’s stones. In 2022, researchers reported training adversarial AI bots to defeat KataGo2, the best open-source Go-playing AI system, which typically beats the best humans handily (and handlessly). Their bots found exploits that regularly beat KataGo, even though the bots were otherwise not very good — human amateurs could beat them. What’s more, humans could understand the bots’ tricks and adopt them to beat KataGo.

Exploiting KataGo

Was this a one-off, or did that work point to a fundamental weakness in KataGo — and, by extension, other AI systems with seemingly superhuman capabilities? To investigate, the researchers, led by Adam Gleave, chief executive of FAR AI, a non-profit research organization in Berkeley, California and co-author of the 2022 paper, used adversarial bots to test three ways of defending Go AIs against such attacks.

The first defence was one that the KataGo developers had already deployed after the 2022 attacks: giving KataGo examples of board positions involved in the attacks, and having it play itself to learn how to play against those positions. That is similar to how it taught itself to play Go more generally. But the authors of the latest paper found that an adversarial bot could learn to beat even this updated version of KataGo, winning 91% of the time.

The second defensive strategy that Gleave’s team tried was iterative: training a version of KataGo against adversarial bots, then training attackers against the updated KataGo and so on, for nine rounds. But this didn’t result in an unbeatable version of KataGo either. Adversaries kept finding exploits, with the final one beating KataGo 81% of the time.

As a third defensive strategy, the researchers trained a new Go-playing AI system from scratch. KataGo is based on a computing model known as a convolutional neural network (CNN). The researchers suspected that CNNs might focus too much on local details and miss global patterns, so they built a Go player using an alternative neural network called a vision transformer (ViT). But their adversarial bot found a new attack that helped it to win 78% of the time against the ViT system.

Weak adversaries

In all these cases, the adversarial bots — although able to beat KataGo and other top Go-playing systems — were trained to discover hidden vulnerabilities in other AIs, not to be well-rounded strategists. “The adversaries are still pretty weak — we’ve beaten them ourselves fairly easily,” says Gleave.

And with humans able use the adversarial bots’ tactics to beat expert Go AI systems, does it still make sense to call those systems superhuman? “It’s a great question I definitely wrestled with,” Gleave says. “We’ve started saying ‘typically superhuman’.” David Wu, a computer scientist in New York City who first developed KataGo, says strong Go AIs are “superhuman on average” but not “superhuman in the worst cases”.

Gleave says that the results could have broad implications for AI systems, including the large language models that underlie chatbots such as ChatGPT. “The key takeaway for AI is that these vulnerabilities will be difficult to eliminate,” Gleave says. “If we can’t solve the issue in a simple domain like Go, then in the near-term there seems little prospect of patching similar issues like jailbreaks in ChatGPT.”

What the results mean for the possibility of creating AI that comprehensively outpaces human capabilities is less clear, says Zhang. “While this might superficially suggest that humans may retain important cognitive advantages over AI for some time,” he says, “I believe the most crucial takeaway is that we do not fully understand the AI systems we build today.”
 
Coders' Copilot code-copying copyright claims crumble against GitHub, Microsoft
  • I think if the coders cannot show infringment the artists have little hope. It seems so much like obvious derivative work.
  • It sounds like they did not work hard enough to show infringement. They referenced a paper that showed they should be able to, but then did not?
Claims by developers that GitHub Copilot was unlawfully copying their code have largely been dismissed, leaving the engineers for now with just two allegations remaining in their lawsuit against the code warehouse.

The class-action suit against GitHub, Microsoft, and OpenAI was filed in November 2022, with the plaintiffs claiming the Copilot coding assistant was trained on open source software hosted on GitHub and as such would suggest snippets from those public projects to other programmers without care for licenses – such as providing appropriate credit for the source – thus violating the original creators' intellectual property rights.

The most recently dismissed claims were fairly important, with one pertaining to infringement under the Digital Millennium Copyright Act (DMCA), section 1202(b), which basically says you shouldn't remove without permission crucial "copyright management" information, such as in this context who wrote the code and the terms of use, as licenses tend to dictate.

It was argued in the class-action suit that Copilot was stripping that info out when offering code snippets from people's projects, which in their view would break 1202(b).

The judge disagreed, however, on the grounds that the code suggested by Copilot was not identical enough to the developers' own copyright-protected work, and thus section 1202(b) did not apply. Indeed, last year GitHub was said to have tuned its programming assistant to generate slight variations of ingested training code to prevent its output from being accused of being an exact copy of licensed software.

The amended complaint argued that unlawful code copying was an inevitability if users flipped Copilot's anti-duplication safety switch to off, and also cited a study into AI-generated code in attempt to back up their position that Copilot would plagiarize source, but once again the judge was not convinced that Microsoft's system was ripping off people's work in a meaningful way.

Specifically, the judge cited the study's observation that Copilot reportedly "rarely emits memorized code in benign situations, and most memorization occurs only when the model has been prompted with long code excerpts that are very similar to the training data."

"Accordingly, plaintiffs’ reliance on a study that, at most, holds that Copilot may theoretically be prompted by a user to generate a match to someone else’s code is unpersuasive," he concluded.
 
Any programmer copies existing source code and then change bits here and there for their own specialized use. Copilot imitates what humans have been doing for decades. To warrant copyright infringement, I think you would need to argue that more than just the source code is being copied. Like, design, brand and functionality.
 
Any programmer copies existing source code and then change bits here and there for their own specialized use. Copilot imitates what humans have been doing for decades. To warrant copyright infringement, I think you would need to argue that more than just the source code is being copied. Like, design, brand and functionality.
I think if you try that with mircosofts code they may disagree.
 

Interesting stuff.

He may have a point that Wall Street and investors have a habit of creating bubbles of overvalued stock in the tech sector.

But that doesn't support his other argument concerning AI technologies and what use we will find for them. From what I've experienced at work these last 12 months, AI is here to stay and more and more applications will incorporate it in obvious or discrete form. A real downside that is already being felt, is that some businesses and workers will become redundant. I know people who have lost their jobs, as an indirect result of AI assisted software replacing them.
 
I think the story here is about selling underspecified or perhaps unspecifiable tools and the fact that the idiots selling us AI are also saying stuff like the internet is completely freeware is not a coincidence. The ability to specify what a system does is what makes it useful as a tool. An amino acid sequencer that’s fine. But what exactly is an LLM “doing?” Is it writing, or pretending to write, and what’s the difference? They can’t answer these questions for the same reason they cannot really assess the criteria that these tools use to make decisions. Fine for simple, or rather, specific problems, like an amino acid sequence. But “write me a press release” is actually fairly unspecific and the management class spoiled on real humans now thinks that virtual humans are just as good. Hence.
 
German court rules AI output can be protectable, ups stakes for machine generated code

A German court ruled last week that AI generated inventions can be patented as long as a human is listed as an inventor.

While the case clarifies the situation in Germany, it arguably further muddies the international debate about who owns AI generated tech related inventions as well as the precise status of code rattled off by a machine, rather than a human.

The case was pushed by the Artificial Inventor Project, which has initiated a series of pro bono legal test cases seeking intellectual property rights for AI-generated output in the absence of a traditional human inventor or author.

The project’s aim is to stimulate dialogue about the impact of AI and similar technologies and produce guidance on just how protectable AI output can be.

In last week’s case, the Bundesgerichtshof, Germany’s highest civil court, ruled that a lunchbox design produced by an AI system called DABUS could be patented. The human applicant, the scientist who owns DABUS – was listed as the inventor, but the application made clear the product was conceived by the AI.

Ryan Abbott, professor of law and health sciences at Surrey University, and one member of the Artificial Inventor project, said in a statement, the ruling “confirmed that an AI generated invention is protectable and that a natural person can be named as an inventor, even if AI has been used to generate the invention.”

Software code is generally covered by copyright law effectively as a literary work rather than patent law.

Yet speaking to DevClass, Prof Abbott said: “One of the questions around the world is, can you copyright AI generated output? And in the UK, the answer is yes. And in the US, the answer is no.”

As for Europe, he continued, “The consensus seems to be probably not traditionally, with copyright law.”

But, he added that AI can be used to develop AI architectures. “If you’re having AI develop new AI systems, this makes those systems potentially patentable in Germany and not in the UK.” And this might be “instructive for the copyright question.”

Abbott said the US had already held that AI outputs are not patentable. “But I think Congress is now concerned about that holding and is looking at changing the law because they don’t want to fall behind in AI development.” Developments like those in Germany could prompt the US to move more quickly to reset its position, he said.
 
What also matters is the rights reserved. That’s how you can use humans as tools to create media products that said humans don’t own.
 
If you can get the AI to output formal language, you can check if it is talking rubbish. Then it can do competitive maths.

Researchers at Google DeepMind claim they've developed a pair of AI models capable of taking home a silver medal in the International Mathematical Olympiad (IMO) – although not within the allotted time limit.

Dubbed AlphaProof and AlphaGeometry 2, these models are designed to help solve one of the bigger hurdles facing popular AI systems today: thanks to limitations in artificial reasoning and training data, they kind of suck at math.

To overcome this, DeepMind developed AlphaProof – which combines a language model with its AlphaZero reinforcement learning algorithm. You may recall this is the same reinforcement model that DeepMind used to master chess, shogi and Go a few years back.

AlphaProof trains itself to prove mathematical statements using a functional programming language called Lean.

"Formal languages offer the critical advantage that proofs involving mathematical reasoning can be formally verified," the DeepMind team wrote in a recent blog.

This means that AlphaProof can not only provide an answer, but prove that it's correct. This differs from existing natural language processing, which will confidently hallucinate a plausible sounding answer – but doesn't actually know whether it's correct.

That's not to say NLP or LLMs can't be useful. DeepMind fine-tuned a Gemini LLM to translate natural language problem statements into ones the AlphaZero algorithm can interpret and use.

"When presented with a problem, AlphaProof generates candidates and then proves or disproves them by searching other possible proof steps in Lean. Every time a solution is verified it's used to reinforce the model.

The DeepMind team did this for millions of problems leading up to the IMO competition – in fact, because it's a reinforcement model, training continued throughout the competition.

AlphaGeometry 2, on the other hand, functions in much the same way as its predecessor, detailed back in January. It combines a neural language model with a "rule-bound deduction engine" and the two work together to find proofs for problems.

As DeepMind explains it, the language model is used to identify patterns and suggest useful constructs. The symbolic engine, meanwhile, uses formal logic to arrive at solutions. The downside to this approach is the second step is comparatively slow.

For its second-gen AlphaGeometry model, DeepMind explained the model's language processing is based on Gemini and trained using an "order of magnitude" more synthetic data than its predecessor. The Symbolic Engine has also been sped up considerably and is said to be "two orders of magnitude" faster.

Putting them to the test​

To put these to the test, the DeepMind team tasked AlphaProof and AlphaGeometry 2 with solving the six advanced mathematics problems faced by competitors in this year's IMO.

The competition – which dates back to 1959 and sees pre-college mathematicians tackle some of the hardest problems in algebra, combinatorics, geometry, and number theory – has become something of a proving ground for machine learning devs in recent years.

According to DeepMind, the two models were able to complete four of the six problems – AlphaProof solved two algebra problems and one number theory problem, and AlphaGeometry 2 tackled this year's geometry problem. Unfortunately, the latter was no match for the two combinatorics questions.

Tallied up, DeepMind's models still did fairly well, with a score of 28 out of 42 – the equivalent of a silver medal and one point off from gold.

However, there seems to be plenty of room for improvement. At least for this competition, DeepMind conceded it was still necessary to translate the problems given to competitors manually into a formal mathematical language the models could understand.

The models also failed to solve the majority of the problems within the allotted time period – which spans two 4.5 hour sessions, spread across two days. While the researchers were able to solve one of the problems within a few minutes, the others took upwards of three days to solve.

The DeepMind researchers are not done yet, of course. They report they are already experimenting with a natural language reasoning system, built on Gemini, that wouldn't require problems to be translated into a formal language and could be combined with other AI systems. That should speed things up a bit. ®
 
If you give Copilot the reins, don't be surprised when it spills your secrets

One hopes widely used enterprise software is secure enough. Get ready for those hopes to be dashed again, as Zenity CTO Michael Bargury today revealed his Microsoft Copilot exploits at Black Hat.

"It's actually very difficult to create a [Copilot Studio] bot that is safe," Bargury told The Register in an interview ahead of conference talks, "because all of the defaults are insecure."

Bargury is speaking twice about security failings with Microsoft Copilot at Black Hat in Las Vegas this week. His first talk focused on the aforementioned Copilot Studio, Microsoft's no-code tool for building custom enterprise Copilot bots, its defaults, and other aspects. The second covered all the nasty things an attacker can do with Copilot itself if they manage to break into the IT environment of an organization that uses the tech, and how Copilot can help someone gain entry to that environment.

Zenity, for what it's worth, offers among other things security controls for Copilot and similar enterprise-level assistants. Bear that in mind. It warns of the risks of using Microsoft's AI services here.

Your Copilot bots are quite chatty​

If you don't have much exposure to Copilot Studio, it's a tool for non-technical people to create simple conversational bots, using Microsoft's Copilot AI, that can answer people's questions using internal business documents and data. This is made possible by what's called retrieval-augmented generation, or RAG.

It's Microsoft's way "to extend [Copilot's] tentacles into other business areas, such as CRM and ERP," as we wrote here. Companies can create customer and/or employee-facing bots that provide a natural-language interface to internal information.

Unfortunately for all the Copilot Studio customers out there, we're told the default settings in the platform at least were entirely insufficient. Combine those with what Zenity marketing chief Andrew Silberman told us is nearly 3,000 Copilot Studio bots in the average large enterprise – we're talking Fortune 500-level companies here – along with research indicating that 63 percent of those are discoverable online, and you have a recipe for potential data exfiltration.

If these bots are accessible to the public, and we're told a good number of them are, they can be potentially tricked into handing over, or simply hand over by design, information to people that should not have been volunteered during conversations, it's claimed.

As Copilot bots frequently have access to internal company data and sensitive documents, it's a matter of figuring out how to fool or prompt them into disclosing that data, we're told. Bargury said he was able to do that by configuring ChatGPT to fuzz Copilot bots with automated, malformed prompts.

"We scanned the internet and found tens of thousands of these bots," Bargury said. He blamed the high online availability of these agents on default Copilot Studio settings that published them to the web without any need to authenticate to access them - an oversight Microsoft has since fixed after the Zenity team brought it to their attention.

Unfortunately, the new settings that keep Copilot Studio bots off the public internet by default currently only apply to new installations, Bargury said, so users of the suite who deployed bots before now should check their settings to be sure.

Bargury and his team have released a tool to "scan for publicly accessible Copilot Studio bots and extract information from them," which folks are welcome to use to test their environments' security. Dubbed CopilotHunter, it's available as a module in PowerPwn, a software toolkit Zenity released at Black Hat last year for probing Microsoft 365 guest accounts.

In short, if you've deployed Copilot Studio bots, check to see how available they are to the world.

While Bargury told The Reg he may have overextended himself by planning two Black Hat talks this year, his second shows no less effort than the first.

Copilot, Bargury demonstrated this week, is susceptible to indirect prompt injection attacks, which he argues is on a par with remote code execution (RCE) in terms of severity.

"An RCE is simply, from a remote location, being able to execute code that does something on your machine," Bargury said. "Indirect prompt injection that makes an AI do something on your behalf is the exact same thing with the same impact."

With access to a compromised environment, Bargury said he can instruct that Microsoft customer's Copilot "to automate spear phishing for all of your victim’s collaborators," use the technology to lure internal users to phishing pages, access "sensitive content without leaving a trace," and more.

To top it all off, it's claimed Copilot can be tricked into granting someone initial access to that environment, and conduct other malicious activities, with nothing but an email, direct message, calendar invite, or other common phishing tactic, and that this can work without the user needing to interact with it or click a link because of how Copilot scans messages. The key thing here is the indirect prompt injection: Crafting messages that the AI assistant picks up and parses to be helpful, and as a result does things it shouldn't.

"Microsoft Copilot is built on the enterprise graph," Bargury explained. Once a message, email or invite is sent it hits the graph, Copilot scans it, "and that's a path for me to start with prompt injection."

In one example, Bargury demonstrated how he was able to change banking information to intercept a bank transfer between a company and client "just by sending an email to the person."

An AI bot feature

Bargury explained to us that he sees these discoveries as indicative of the industry still being in the very early days of artificial intelligence in the enterprise, and having to face the fact that AI is changing our relationship with data.

"There's a fundamental issue here," he said. "When you give AI access to data, that data is now an attack surface for prompt injection."

If that's true, Copilot bots are by their very nature insecure since many are publicly accessible, they're tied closely to enterprise data, and are ready to spill secrets with a bit of hidden HTML or a ChatGPT-powered fuzzing bot.

"It's kind of funny in a way - if you have a bot that's useful, then it's vulnerable. If it's not vulnerable, it's not useful," Bargury said.

The Zenity CTO noted that Microsoft has been incredibly responsive to his reports, and said several of the faults he found have been addressed, albeit within limits.

"[AI] apps are basically changing in production because AI chooses to do what it wants, so you can't expect to have a platform that's just secure and that's it," Bargury said. "That's not going to happen because these platforms have to be flexible, otherwise they're not useful."

Bargury believes that securing AI software like Copilot requires real-time monitoring, inspecting conversations with the bots, and tracking potential prompt-injection RCEs. The bottom line is that businesses are the guinea pigs testing an experimental drug called artificial intelligence, and we're not at a point where we know how to make it safe yet.

Finally, Bargury and team released another testing kit called LOLCopilot for organizations that want to test their setups for vulnerability to his exploits.

"Copilot has great skills. It can search, it can enable your employees to find data they have access to but didn't know they did … those things are important," Bargury told us. "But that's not as important as preventing remote code execution."
 
Weather and climate predicted accurately — without using a supercomputer

Numerical models of Earth’s climate system, based on the laws of physics, are essential tools for predicting changes in weather and climate in our warming world. However, persistent uncertainties in the representation of small-scale processes, such as cloud formation, have resulted in only slow improvements in the fidelity of these models. Interest in using machine learning to provide a data-driven representation of these uncertain processes has therefore skyrocketed. Yet challenges have persisted in coupling these representations to conventional atmospheric models. Writing in Nature, Kochkov et al.1 report a hybrid global atmospheric model that overcomes these challenges, combining physics-based approaches with machine learning. This model not only predicts the weather up to 15 days ahead with state-of-the-art accuracy, but also simulates historical temperature trends over a 40-year period, demonstrating its ability to assess climate.

Weather forecasting accuracy scores for deterministic and stochastic models.

NeuralGCM Is the new AI supported tool.

41586_2024_7744_Fig2_HTML.png

Spoiler Legend :
a,c, RMSE (a) and RMSB (c) for ECMWF-ENS, ECMWF-HRES, NeuralGCM-0.7°, NeuralGCM-ENS, GraphCast3 and Pangu4 on headline WeatherBench2 variables, as a percentage of the error of ECMWF-ENS. Deterministic and stochastic models are shown in solid and dashed lines respectively. e,g, CRPS relative to ECMWF-ENS (e) and spread-skill ratio for the ENS and NeuralGCM-ENS models (g). b,d,f,h, Spatial distributions of RMSE (b), bias (d), CRPS (f) and spread-skill ratio (h) for NeuralGCM-ENS and ECMWF-ENS models for 10-day forecasts of specific humidity at 700 hPa. Spatial plots of RMSE and CRPS show skill relative to a probabilistic climatology12 with an ensemble member for each of the years 1990–2019. The grey areas indicate regions where climatological surface pressure on average is below 700 hPa.


Article (paywalled) Paper
 
Back
Top Bottom