In the mid-1970s, a friend in my undergraduate engineering program spoke excitedly about computer programs and modelling to build process control systems that could virtually operate complex metallurgical plants. But I was a mining engineer, so I patted his head and said, "That is nice." Five years later, I was introduced to a complex computer program that used drill hole data to build and then virtually "mine" ore deposits. Its results were stunning and hugely labour-saving. I was hooked on the new technology but feared that my job might be at risk, so I started to promote myself as an expert in computer modelling. But then something marvellous happened. Management caught the gist of what the computer could do and inundated me with questions and with suggested variations in commodity prices and market demand. Within a year, our group had built over one hundred reserve models and run over two hundred and fifty cash flow schedules. A monster had been created, and we hired more people to feed it.

Forty years ago, the term "artificial intelligence" had not entered the lexicon. In those days, we talked about process control systems, geological resource models and mine design optimizers, which were, in fact, artificial intelligence. But were they then, and are they now, uniquely intelligent? No, they are not. If the database is inaccurate, the model will rapidly spit out wildly inaccurate information. Nevertheless, the high-speed process of getting the wrong answer is always remarkable.

Today's artificial intelligence, or large language models, are categorically no different from the computer block models used in the mining industry. The computer is fed as much data as can be scraped together, the data is conditioned by special programs so that it is suitable for manipulation, some statistical hokey-pokey is applied, and voila: a model is produced that seems to have a mind of its own. In the case of the mining industry, the language it speaks is numerical instead of English, but the speed with which it produces a complex answer is, once again, remarkable.

I am, therefore, not a believer in the breathless nonsense about "AI" and "large language models". These are just new types of computer models, based on enormous as opposed to merely huge amounts of data. The difference between the models lies in the amount of data, not in any averred ability to "think". Computers do what they are told to do. They do not think, regardless of appearances to the contrary. Sorry, Dr. Kurzweil, there will be no singularity. Sorry, Mr. Musk, your robots won't kill me unless they are told to do so. If someone kills themselves at the insistence of ChatGPT, that says more about the intelligence of the deceased than about that of the chatbot. Sorry, but it does.

The issue for both mining industry models and large language models is the unpredictability of the results where there is limited data. If you tell the mine model to interpolate values where there is little data, it will do so. If you ask ChatGPT to answer a question for which it has little data, it will answer anyway. Today we call this hallucinating. In my day, we called it the "spotted dog effect." Same thing.
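To make the point concrete, here is a minimal sketch, assuming inverse distance weighting (one common grade-interpolation method; the column does not say which method the author's models used, and the sample numbers are invented for illustration). Fed three clustered drill hole samples, the estimator returns a perfectly confident grade at a point 500 metres from any data; nothing in the arithmetic flags the answer as unsupported.

    import math

    # Hypothetical drill hole samples: (x, y, grade in g/t), all clustered together.
    samples = [(0.0, 0.0, 2.1), (10.0, 5.0, 1.8), (5.0, 12.0, 2.6)]

    def idw_grade(x, y, power=2.0):
        """Inverse-distance-weighted grade estimate at (x, y)."""
        num = den = 0.0
        for sx, sy, g in samples:
            d = math.hypot(x - sx, y - sy)
            if d < 1e-9:          # standing on a sample: return it exactly
                return g
            w = 1.0 / d ** power  # nearer samples get heavier weights
            num += w * g
            den += w
        return num / den

    print(idw_grade(5.0, 5.0))      # ~2.1 g/t, inside the drilled cluster
    print(idw_grade(500.0, 500.0))  # ~2.2 g/t, far from any hole: confident, unsupported

The formula never refuses to answer; any confidence in the second number is supplied entirely by the reader. A chatbot asked about a topic thinly covered in its training data behaves the same way.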
Herein lies the real problem with artificial intelligence and with models of all kinds (think climate predictive models). When your Grok-created news item offers salacious details about a particular politician, it is very tempting to publish the result and await the Pulitzer Prize. When your computer model interpolates fantastically high grades in areas of low data, it is very tempting to publish the results and sell your stock at an inflated price. When your climate model predicts an average temperature increase of five degrees, it is very tempting to apply for a new grant. There are few incentives to check the computer's output. It is AI, and rapidly produced, so it must be valid. There is nothing new under the sun.

But for our world to work, we must pull back the curtain and check the results. How do we pull back the curtain? In the case of large language models, check the references cited in response to your query. Ninety percent of the time, the response comes directly from Wikipedia, and very often it is wrong because the data was taken out of context. Speaking of Wikipedia: a year ago, it informed me that Elon Musk and Sam Altman, like everyone else, had free access to its information. A couple of months ago, it further informed me that these gentlemen and their companies are now being billed for the dramatic increase in servers required to maintain reasonable response times. It is reminiscent of the hollowing out of the news business by Google and Meta, which pay nothing for the content created by a business now on life support.

Musk and his team use large amounts of public energy to gather data from Wikipedia, then answer queries with Wikipedia's content. What a scam. Perhaps that is the dark nature of the singularity.