How scientists and engineers can stay ahead in the age of AI
Learn how scientists and engineers can thrive in the AI era. Explore the primary differences between classic and generative AI and key strategies to prepare for the future.
Katie Stone
August 12, 2025
5 min. read
Let’s face it: AI has already made itself at home in science and engineering, reshaping how we work. By taking on the repetitive, brain-numbing stuff, AI frees scientists and engineers to think big, solve real problems, and maybe even get home on time. But that doesn’t mean there aren’t new challenges and risks to navigate.
We chatted recently with Russ Wolfinger, Ph.D., Director of Scientific Discovery and Genomics at JMP, to better understand how AI could affect the day-to-day tasks of scientists and engineers. We captured the highlights in a white paper that offers clear guidance for working with AI in this time of transition.
The Top Seven Ways Scientists and Engineers Should Prepare for the AI-Driven Era
Download the full white paper. No registration required.
What is the difference between classic AI and generative AI?
Classic AI is a more traditional form of AI that excels at performing specific, predefined tasks by following a set of rules or algorithms. Classic AI is commonly used for tasks like predictive modeling, data analysis, spam filtering, fraud detection, and recommendation systems. It often requires specific training for each unique task and can be more transparent and interpretable in its decision-making process.
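To make that transparency concrete, here is a minimal sketch of a classic classifier whose learned logic can be printed and audited. It uses Python with scikit-learn; the library, dataset, and settings are illustrative choices, not anything the white paper prescribes.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Load a small, well-known dataset; the task itself is illustrative.
data = load_iris()
X, y = data.data, data.target

# A shallow tree keeps the learned decision logic small enough to read.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Unlike a generative model, every prediction here traces back to
# explicit, printable if/then rules.
print(export_text(model, feature_names=data.feature_names))
```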
"Virtual assistants like Siri or Alexa, Netflix's recommendation system, and computer chess programs all fall under the category of traditional AI. These systems are highly efficient within their defined boundaries but lack the ability to create new content." (Source: MIT xPRO) -
Generative AI goes beyond analysis to create new, original content, such as text, images, code, or music. It learns from vast data sets to identify patterns and generate new outputs based on a prompt. Generative AI (GenAI) systems, like those based on large language models (LLMs), are highly adaptable and can be used for a wide range of creative tasks. However, these models can often be less transparent due to their complex learning algorithms and may require more substantial computational resources. They are also prone to hallucinations, which can limit their reliability in high-stakes or technical applications.
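As a minimal sketch of that prompt-to-output loop, here is a single call to an LLM using the OpenAI Python SDK. Assume the SDK is installed and an API key is configured; the model name and prompt are illustrative, and any chat-capable model would behave similarly.

```python
from openai import OpenAI

# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set;
# the model name here is illustrative.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize classic AI versus generative AI in two sentences.",
    }],
)

# The reply is newly generated text, not a label from a fixed set,
# which is also why it should be verified before it is trusted.
print(response.choices[0].message.content)
```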
"Platforms like ChatGPT, Gemini, and Claude can engage in human-like conversations, while an image generator like DALL·E creates images from text descriptions. This creative aspect sets generative AI apart, opening up possibilities for innovation across various fields, including entertainment, design, and even scientific discovery." (Source: MIT xPRO)Wolfinger explains, "Generative AI is getting all of the attention and hype with the likes of ChatGPT and others based on large language models (LLMs). We are now routinely passing the traditional Turing test of not knowing if you are communicating with a human or machine."
He adds, "I would encourage folks to focus on more classic methods like image and text classification, as they are now very well-vetted and reliable and can be trained readily on common hardware. There are tons of potential applications of classic methods yet to be explored and leveraged for success."
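As a rough illustration of how little such a classic method demands, here is a toy text classifier in Python with scikit-learn. The library choice, the tiny training set, and the labels are all assumptions made for illustration; a real application would use labeled domain text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, made-up training set; real data would be labeled domain text.
texts = [
    "reactor temperature drifting above spec",
    "quarterly budget review scheduled",
    "pressure sensor fault on line three",
    "team lunch moved to Friday",
]
labels = ["process_alert", "admin", "process_alert", "admin"]

# TF-IDF features plus a linear model: fast to train on a laptop,
# and the fitted weights can be inspected directly.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["valve fault detected on line seven"]))
```

A model like this trains in seconds on ordinary hardware, which is exactly the kind of well-vetted, accessible method Wolfinger is pointing to.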
How can scientists and engineers strengthen their thinking in the AI era?
AI can significantly boost productivity if the user understands what the AI is doing and can validate or interpret results. Otherwise, risks increase:
- An engineer may use AI to optimize a process and implement a solution that isn’t feasible in practice.
- A scientist could use AI to summarize journal articles and miss critical flaws in methodology that can affect conclusions.
- Cognitive offloading and overreliance on AI can be dangerous, especially in regulated industries where safety or compliance is critical (e.g., pharmaceuticals, aerospace, or semiconductors).
To counter these risks, scientists and engineers can apply strategies such as:
- Learning how to interpret AI models, not just use them: understanding their assumptions, limitations, and diagnostics (see the sketch after this list).
- Encouraging skeptical engagement. Asking engineers to challenge, replicate, and cross-check AI outputs.
- Building critical thinking into your culture. Rewarding questioning and collaboration, not just quick answers.
- Promoting experimental thinking. Training teams in design of experiments (DOE), root cause analysis, and iterative refinement.
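As one small example of that first strategy, the sketch below uses permutation importance, a generic model diagnostic, to probe which inputs a model actually relies on. It uses Python with scikit-learn; the synthetic data and the choice of model are illustrative stand-ins, not a recommended workflow.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 6 inputs, only 3 of which carry signal.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each input in turn and measure how much held-out accuracy drops.
# Inputs whose shuffling barely hurts the score carry little real signal:
# a cheap sanity check before acting on the model's predictions.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, drop in enumerate(result.importances_mean):
    print(f"feature {i}: mean score drop {drop:.3f}")
```

None of this replaces domain judgment, but it turns the model from a black box into something a skeptical reviewer can interrogate.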
How can scientists and engineers use AI while still preserving their critical thinking skills?
In a study conducted by Microsoft, The Impact of Generative AI on Critical Thinking, findings indicate that the 319 knowledge workers surveyed "engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem solving."
The effort put into critical thinking is reduced as confidence in GenAI’s ability to perform a task grows. The researchers suggest that the developers of GenAI tools should survey users to gain greater insight into how specific tools can evolve to better support critical thinking in different tasks. Their work suggests that "GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers."
Without continued effort to preserve critical-thinking skills, scientists and engineers may lose the ability to question assumptions, design experiments, interpret results, and adapt when outcomes defy expectations.
Download the white paper to get more great insights. No registration necessary.