Technology

A Cautious Embrace of LLMs


By Jake Pitre

Aug 28, 2025

Graphic by Adam Dixon

Like workers in virtually every industry, farmers are feeling pressure to integrate AI into their everyday workflows. But even early adopters remain wary.

“If we don’t take an interest in AI, AI is going to take an interest in us,” says Marc Arnusch of Arnusch Farms in Colorado.

Make no mistake: Digital technology, and artificial intelligence specifically, has been a key part of farming for years. What has changed, however, is the immense interest and investment in ChatGPT and other generative AI, particularly large language models (LLMs) and chatbots. As seemingly every sector and industry rolls the dice on LLMs, it has likewise become a common discussion topic among farmers, agriculture experts, and anyone else involved in the business of food.

While hype abounds and capital flows, it’s worth pausing for a moment to assess what LLM applications are already being used in ag, and what is being developed and forecast. Advocates say its potential for improving agricultural efficiency and addressing localized vulnerabilities makes it a game changer for the sector, while critics argue that the propensity for misinformation, alongside privacy and environmental concerns, means that, at the very least, far more regulation is needed.

Building for Farmers

Researchers around the world are scrambling to develop cutting-edge chatbots and LLMs for farmers, making use of various datasets and following a diverse array of ethical guidelines. David Warren is senior director of integrated digital strategies for Oklahoma State University’s Agriculture Division, and the AI program leader for the Extension Foundation, an educational ag non-profit. For many years, the foundation managed a platform called “Ask An Expert,” essentially a human-powered search engine, as users could ask a question and, hours or days later, get a response from a real expert — often one residing in the user’s state. Warren and others realized that this dataset of around 400,000 questions and answers was “an ideal training set for AI,” Warren told Offrange.

The foundation, which works closely with the U.S. Department of Agriculture (USDA), ultimately developed a chatbot after holding a series of workshops to determine what it should and shouldn’t be able to do. The intention was to offer actionable advice, focusing somewhat narrowly on science- and fact-based outputs. ExtensionBot, as it was dubbed, launched in November 2022, and two weeks later, ChatGPT hit.

“If we don’t take an interest in AI, AI is going to take an interest in us.”

OpenAI’s model possessed the ability to pull directly from large swaths of the internet, all of which made a static, limited tool like ExtensionBot suddenly seem insufficient. “We ripped out ExtensionBot’s guts,” Warren said, “and replaced it with an open-source model.” OpenAI had changed the game, and even the foundation’s small, direct-purpose tool felt the need to respond.

The main concern, of course, is the model’s trustworthiness, especially for farmers, who are notoriously risk-averse when it comes to their bottom line. This is understandable, Warren said, but he argues that, unlike a giant model like ChatGPT, what his group can offer is something much more purposeful and specific. His LLM is trained only on verified agricultural data, unlike ChatGPT’s scraping of essentially the entire internet, which has led in part to “hallucinations” built on inaccurate or incomplete information.

By the end of 2025, Warren’s team plans to include all the data contained within the foundation’s many systems throughout the country, alongside USDA sources. But even then, Warren stressed, “We curate what goes in there, even from those sources. We have a different idea of what is trustworthy, which is research-based outputs.”

Potential Advantages

Some farmers are taking a distinctly nuanced approach to AI. Farmer Marc Arnusch says he uses LLMs like ChatGPT and Grok to “help disseminate contracts, understand business strategy, in landlord negotiations, and email composition.” He has also used an AI-assisted Farmers Business Network platform to inquire about potential strategies for wheat control, and to suggest crop protection products. Even so, Arnusch is well aware of the risks: “It’s a recommendation, not a marching order. The farmer still has to make the decision, and do some ground truthing to make sure it’s legitimate and accurate, and understand the risk. It would be no different to going to anyone else for advice: You still need to scrutinize it and make sure it’s the right decision for your operation.”

This reflects some of the first research that has emerged on LLMs in agriculture. A study published last fall concluded that “While LLMs can potentially enhance agricultural efficiency, drive innovation, and inform better policies, challenges like agricultural misinformation, collection of vast amounts of farmer data, and threats to agricultural jobs are important concerns.” Another study argued that specialized LLMs for agriculture would “provide better consulting, explanation, interpretation, and decision recommendations,” which would only work if “it is ensured that the data behind these models is location-dependent, rely on real observations, and is up-to-date” — which Warren and ExtensionBot are aiming to achieve.

“It’s a recommendation, not a marching order. The farmer still has to make the decision, and do some ground truthing to make sure it’s legitimate and accurate, and understand the risk.”

As Arnusch has spent more time with these tools, a more exacting approach has helped to clarify what they can and can’t do. “We need to be apprehensive and not embrace this technology with open arms,” he said. “It is something that we need to understand, not necessarily adopt. [We need to] understand how it’s going to change our competitiveness, our markets, our ability to make good decisions quickly.”

Jim Wally, a small-scale farmer and engineer in north Florida, has played around with ChatGPT and other tools. His assessment is that if you’re an absolute expert on a subject and you can recognize immediately whether an AI output is accurate or not, it “can be reasonable to use it as a shortcut, but I wouldn’t trust it for anything that you don’t have an incredibly good understanding of.”

As in most industries, it shouldn’t be surprising that there is a diversity of opinion among farmers about the usefulness of AI and LLMs, especially when looking ahead to the future. Arnusch said that, among the farmers he interacts with, “There’s a high adoption rate of these technologies,” especially among younger producers, but he acknowledges that his personal network may not represent U.S. agriculture at large.

Risks Abound

Wally notes that many farmers are trying AI for simple tasks like email composition. But when it comes to using chatbots for actual advice and problem-solving, he identifies three groups across the farmers he knows: the old-fashioneds who haven’t used it and aren’t certain it could help them; some that are excited about it and want to use it as much as possible, while understanding they can’t fully trust it; and, “the biggest group, it seems, is those that were interested in it, tried it out, it gave them some grievously incorrect or harmful answer, and never touched it again.” He places himself in that final category.

Warren understands these concerns, but argues that “LLMs can support broadening the reach of expert advice and making it more accessible.” For the foundation’s tool, they are avoiding the verbose answers familiar to ChatGPT users and focusing squarely on simplified, fact-based responses. It also collects no personal data, unlike the larger bots. Among other concerns, one study noted that “farmers might input increasing amounts of personal information into these chatbots, including agronomic ‘trade secrets,’ such as what they grow, how they grow it, and personal information such as age, gender, and income.” Several chatbots, including an early version of ChatGPT, have had this type of data leaked. That is surely cause for concern, but a larger worry is how little is known about what these companies do with that data.

“If an LLM recommends a pesticide that ends up killing an entire crop, it’s hard to know who’s to blame.”

Even so, Warren admits the tool hasn’t been tested with actual farmers yet — funding for that comes next. In the meantime, while slower-paced and lower-resourced efforts struggle to take shape, the bigger players are enjoying extreme popularity. Wally, for his part, puts this growth — and its attendant dangers — in stark terms: “People will die,” he says. “It can tell you the arsenic concentration in your soil is fine, and it can kill a person.” Or, even on a non-lethal scale, “It can tell you that your calcium’s out of range and that you need to spend $10,000 on more calcium, and it’s simply not true, and there’s no way to know that besides doing the work that you’re trying to skip by using the model.”

Sarah Marquis, outreach strategist for the National Farmers Union in Canada, notes that “the farmer is not benefitting as much as the corporations are” from the imposition of AI technology on their practices. “These technologies can be helpful, but it’s the companies that have all the data, which they can make profitable use of.” While chatbots and other AI-enabled tools are still emergent for most farmers, LLMs in particular lack transparency, Marquis added, and what they advise “might seem objective and true, but [it] might not be, and the way they arrive at their answers is unclear. We encourage our members to be careful with tools like that.”

Moreover, agribusiness companies like Cargill and Bayer are seizing the opportunity to push AI tools specifically for farmers, and it becomes difficult to assess their trustworthiness when their data is proprietary and kept from view. An October 2022 article in the academic journal Frontiers in Sustainable Food Systems analyzed multiple research studies and determined that farmers may be right to be concerned. “Generally, farmers have limited control of their farm data by agricultural technology providers (ATPs). This raises concerns about privacy of farm data ... farmers are usually not informed about the purpose of data collection from the farm, how their data is used, and whether their data is shared with third parties. Due to the lack of control and lack of transparency, farmers are unwilling to share their data with ATPs.”

Then there is the even more elusive issue of accountability. “If an LLM recommends a pesticide that ends up killing an entire crop, it’s hard to know who’s to blame,” Marquis said. “Is the algorithm going to be held accountable, is the corporation?” From the farmer union’s point of view, there are far more long-standing and significant issues to address, namely economic dignity for farmers and improvements in safety. “LLMs are not helping with any of those outcomes,” Marquis said.

“These technologies can be helpful, but it’s the companies that have all the data, which they can make profitable use of.”

Marquis also notes the rapidly growing investment in AI data centers and the heavy toll they take on the environment, and how many of these centers are being built on rural land — land that is, or could be, agricultural. Wally calls this “a bubble” for short-term gain, regardless of long-term consequences. Arnusch suggests that these centers present rural economic development opportunities but, at the same time, the loss of agricultural land is an obvious detriment. Either way, he says, “We simply have too many businesses boarded up, and we see a hollowing out of our rural towns” — ideally, the centers will help spur the development these areas need.

As AI tools and LLM chatbots continue their growth, whether via large tech corporations, research-based academic prototypes, or UN-funded global platforms, farmers across America and beyond face an uncertain future shaped by technologies bearing a lot of hype and promise. As Wally points out, this reflects a “broader historical trend of Silicon Valley being really sure that they’re smarter than everyone else in the world and they’re going to solve a problem in an industry they know nothing about.”

Wally does think farmers remain open to anything that can help improve their business. At the same time, “I imagine that a lot of farmers are probably a little insulted by the idea that they need [AI] to tell them how to run their farm.”

Author



Jake Pitre

Jake Pitre is a scholar and writer living in Montreal. His writing has appeared in The Globe and Mail, The Atlantic, Fast Company, and elsewhere.

