Insights // 30 October 2025

Research Insights Autumn 2025

Our researchers have been busy over the past few months and are gearing up to present their work from the past year at the Second Annual Summit. Ahead of this exciting event, catch up on a summary of some of their latest work and developments.

Building EMOS: Towards an Integrated AI Platform for Electronic Materials Discovery

Over the past few months, I have been developing EMOS, a framework aimed at integrating databases, AI models, and simulation tools for electronic materials discovery. In parallel, I have been exploring the use of AI in circuit design and physics-informed neural networks for device simulation, with the goal of creating more data-efficient and interpretable approaches.
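As a rough illustration of the physics-informed idea (not the EMOS implementation itself), the sketch below trains a small network to satisfy a one-dimensional Poisson equation of the kind that appears in device electrostatics; the constants, network size, and training settings are arbitrary assumptions.

```python
# Minimal physics-informed neural network (PINN) sketch: solve the 1D Poisson
# equation  d^2(phi)/dx^2 = -rho/eps  on [0, 1] with phi(0) = phi(1) = 0.
# Illustrative only; the constants and network size are arbitrary assumptions.
import torch

torch.manual_seed(0)
rho_over_eps = 1.0  # assumed constant charge density / permittivity

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x_col = torch.rand(256, 1, requires_grad=True)   # interior collocation points
x_bc = torch.tensor([[0.0], [1.0]])              # boundary points

for step in range(2000):
    phi = net(x_col)
    # First and second derivatives of phi with respect to x via autograd
    dphi = torch.autograd.grad(phi, x_col, torch.ones_like(phi), create_graph=True)[0]
    d2phi = torch.autograd.grad(dphi, x_col, torch.ones_like(dphi), create_graph=True)[0]
    pde_loss = ((d2phi + rho_over_eps) ** 2).mean()   # residual of the Poisson equation
    bc_loss = (net(x_bc) ** 2).mean()                 # phi = 0 at both boundaries
    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Analytic solution is phi(x) = 0.5 * x * (1 - x), so the midpoint value should approach 0.125
print(net(torch.tensor([[0.5]])).item())
```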
At MEMRISYS 2025, I presented a demo of EMOS and received valuable feedback from experts in the field. The event also provided excellent opportunities to connect with researchers working on AI-driven materials innovation.
At the November Summit, I look forward to representing APRIL and demonstrating EMOS. I hope to engage with industry partners to gather insights on pressing challenges in the field and identify areas where APRIL’s research can have the greatest impact.

Dr Atish Dixit

Bridging AI and Hardware: Intelligent Design and Energy-Efficient Computing

In recent months, the research I have been involved in has focused on several themes at the intersection of AI and electronic systems, combining theoretical development with practical experimentation to advance intelligent design and computing methodologies.
A key direction investigates AI-driven design automation for circuits and systems. This work explores how Large Language Models (LLMs) can be integrated into Electronic Design Automation (EDA) workflows, enabling designers to specify requirements in natural language and receive rapid, targeted feedback on power, performance, and area (PPA) metrics. The goal is to make circuit design more intuitive and accessible while reducing inefficiencies and design errors. The evolving toolchain targets digital, analogue, and radio-frequency domains, with my specific focus on Field Programmable Gate Array (FPGA) and digital Application-Specific Integrated Circuit (ASIC) design flows.
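A deliberately simplified sketch of what such a natural-language loop could look like is shown below; generate_rtl, run_synthesis, and the PPA report format are hypothetical placeholders standing in for the real LLM and EDA tools, not the actual toolchain under development.

```python
# Hypothetical sketch of an LLM-in-the-loop EDA flow: a plain-language spec is
# turned into RTL, synthesised, and refined against PPA (power/performance/area)
# targets. All helper functions are placeholders standing in for real tools.

def generate_rtl(spec: str, feedback: str = "") -> str:
    """Placeholder for an LLM call that returns HDL for the given spec."""
    return f"// RTL for: {spec}\n// revised with: {feedback}"

def run_synthesis(rtl: str) -> dict:
    """Placeholder for a synthesis run returning PPA metrics."""
    return {"power_mw": 1.2, "fmax_mhz": 480.0, "area_um2": 950.0}

def meets_targets(ppa: dict, targets: dict) -> bool:
    return (ppa["power_mw"] <= targets["power_mw"]
            and ppa["fmax_mhz"] >= targets["fmax_mhz"]
            and ppa["area_um2"] <= targets["area_um2"])

spec = "8-bit pipelined multiply-accumulate unit"
targets = {"power_mw": 1.5, "fmax_mhz": 400.0, "area_um2": 1000.0}

rtl = generate_rtl(spec)
for iteration in range(3):                      # bounded refinement loop
    ppa = run_synthesis(rtl)
    if meets_targets(ppa, targets):
        break
    feedback = f"iteration {iteration}: reported PPA {ppa}, targets {targets}"
    rtl = generate_rtl(spec, feedback)
```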
In parallel, I have contributed to research on hardware architectures for AI acceleration. To address the energy demands of conventional computing platforms, the Triangular Input Movement (TrIM) Systolic Array was developed—a novel architecture that enhances energy efficiency through localised data reuse and reduced memory access. Related results were published in IEEE Transactions on Circuits and Systems for Artificial Intelligence and IEEE Transactions on Circuits and Systems I: Regular Papers.
Further work explores in-memory computing using Resistive RAM (RRAM) devices—non-volatile memories capable of performing both storage and computation. A hardware–software framework for live neural network classification was demonstrated at IEEE ISCAS 2025 and MEMRISYS 2025 conferences.
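The principle behind this kind of in-memory computing can be simulated numerically, as in the sketch below: weights are stored as device conductances, inputs are applied as row voltages, and each column current is the corresponding dot product by Ohm's and Kirchhoff's laws. The values and the 5% variation figure are illustrative, not measured.

```python
# Numerical illustration of an RRAM crossbar performing a matrix-vector multiply:
# weights are stored as conductances G, inputs are applied as row voltages V, and
# each column current is I = G^T V (Ohm's law plus Kirchhoff's current law).
# Values are illustrative; a real array also has noise, IR drop, and quantisation.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # conductances in siemens (4 rows x 3 columns)
V = rng.uniform(0.0, 0.2, size=4)          # input voltages on the rows, in volts

I = G.T @ V                                # column currents: the analogue dot products
print("column currents (A):", I)

# A crude non-ideality: device-to-device conductance variation of about 5%
G_noisy = G * (1 + 0.05 * rng.standard_normal(G.shape))
print("with 5% variation  :", G_noisy.T @ V)
```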

Dr Cristian Sestito 

Can We Use AI to Automate Data Extraction from Research Articles?

Over the past few weeks, I’ve been working on data extraction related to the physical and mechanical properties of polymer nanocomposites — materials that play a crucial role in technologies like flexible electronics and energy-harvesting devices. During this process, I realized that much of the valuable information in research papers and patents is presented in graphical or plot form, making automated data extraction quite challenging. Moreover, while large language models (LLMs) can assist in pulling information from text, their results often lack consistency and accuracy.
So far, I have not been able to find an AI tool that reliably supports this kind of data extraction. To overcome these challenges, my current focus is on developing tools that can make data extraction from such complex sources more reliable and efficient. The goal is to bridge the gap between human-level understanding and machine-based automation, so researchers can access clean, structured data without spending endless hours digitizing graphs or double-checking extracted values. It’s an exciting step toward making scientific data more open, accessible, and usable for everyone.
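As one small example of the kind of building block involved, the sketch below shows the axis-calibration step of graph digitisation: once two reference ticks per axis are known, any picked pixel can be mapped to data coordinates. The pixel values and axis labels here are made up, and logarithmic axes would need an extra transform.

```python
# Sketch of the axis-calibration step used when digitising a plot: given the
# pixel positions of two known ticks on each axis, map any picked pixel to data
# coordinates by linear interpolation. Pixel values below are made-up examples.
import numpy as np

def calibrate(p0, p1, d0, d1):
    """Return a function mapping a pixel coordinate to a data coordinate."""
    scale = (d1 - d0) / (p1 - p0)
    return lambda p: d0 + (p - p0) * scale

# Two known ticks on each axis (pixel position, data value)
x_map = calibrate(p0=100, p1=700, d0=0.0, d1=10.0)   # x axis: e.g. strain (%)
y_map = calibrate(p0=550, p1=50, d0=0.0, d1=200.0)   # y axis: e.g. stress (MPa), image y grows downward

picked_pixels = np.array([[250, 430], [400, 310], [550, 190]])  # points picked on the curve
data = [(x_map(px), y_map(py)) for px, py in picked_pixels]
print(data)   # [(2.5, 48.0), (5.0, 96.0), (7.5, 144.0)]
```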

Dr Shashank Mishra

Accelerating Electron Device Design Through AI-Driven Inverse Design and Automated Literature Data Extraction

The rapid evolution of electronics and computing systems demands faster and more efficient design of nanoscale devices. Traditional design workflows—heavily reliant on iterative TCAD simulations and experimental prototyping—are too slow to explore the vast design space of emerging device architectures and process variations. Accelerating device design is therefore critical to sustaining performance scaling and system-level innovation.
My work aims to accelerate the inverse design of electron devices by coupling ML-driven optimization with a scalable, fabrication-aware data engine. Starting from experimental device data, I develop and benchmark machine learning models for optimization and inverse design that balance performance, reliability, and manufacturability, while reducing reliance on computationally expensive TCAD simulations.
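A stripped-down picture of this surrogate-assisted inverse design is sketched below: a regressor fitted on a modest set of device samples stands in for expensive TCAD runs, and the design space is then searched for parameters that hit a target figure of merit. The analytic "device response", the parameter ranges, and the target value are invented purely for illustration.

```python
# Toy surrogate-assisted inverse design: fit a regressor on a few (design -> figure
# of merit) samples, then search the design space for the parameters whose predicted
# response is closest to a target. The "device" below is an invented analytic stand-in.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

def device_response(params):
    """Invented stand-in for an expensive TCAD run: params = (gate_length, doping)."""
    gate_length, doping = params[..., 0], params[..., 1]
    return 0.2 + 2.0 * gate_length + 0.02 * (np.log10(doping) - 17.0)

# A small "measured/simulated" training set over an assumed design space
X_train = np.column_stack([rng.uniform(0.01, 0.1, 200),      # gate length (um)
                           rng.uniform(1e17, 1e19, 200)])    # doping (cm^-3)
y_train = device_response(X_train) + 0.005 * rng.standard_normal(200)

surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Inverse design step: dense random search for a target threshold-voltage-like FoM
target = 0.32
X_cand = np.column_stack([rng.uniform(0.01, 0.1, 20000),
                          rng.uniform(1e17, 1e19, 20000)])
pred = surrogate.predict(X_cand)
best = X_cand[np.argmin(np.abs(pred - target))]
print(f"candidate design: gate length {best[0]:.3f} um, doping {best[1]:.2e} cm^-3")
```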
A key challenge in this process is acquiring large, high-quality datasets that capture realistic device behaviour and fabrication variability. Achieving robust ML-driven optimization typically requires on the order of 1k to 50k+ data points, where more data directly enhances model accuracy, generalizability, and reliability. However, experimentally generating such datasets is time-consuming and resource-intensive. An alternative, highly scalable source of data exists in heterogeneous scientific literature, which contains valuable quantitative and qualitative information scattered across text, tables, and figures. Manual extraction of this information is slow, inconsistent, and expertise-dependent. To overcome this bottleneck, I am developing an AI-based data extraction pipeline that automates the retrieval of device geometries, materials, process conditions, and performance metrics using text parsing, ML-based image digitization, and Vision-Language Models (VLMs).
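A hypothetical skeleton of such a pipeline is sketched below, with a structured record filled in from whichever source fragment contains the information; the regex, the DeviceRecord fields, and the commented-out call_vlm helper are placeholders rather than the actual implementation.

```python
# Hypothetical skeleton of a literature data-extraction pipeline: each source
# fragment (text, table, or figure) is routed to a suitable extractor and the
# results are normalised into one device record. call_vlm is a placeholder,
# not a real library call.
import re
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DeviceRecord:
    material: Optional[str] = None
    gate_length_nm: Optional[float] = None
    on_off_ratio: Optional[float] = None
    source: list = field(default_factory=list)

def extract_from_text(text: str, record: DeviceRecord) -> None:
    """Very crude regex pass for quantities stated in prose."""
    m = re.search(r"gate length of\s*([\d.]+)\s*nm", text, re.IGNORECASE)
    if m:
        record.gate_length_nm = float(m.group(1))
        record.source.append("text")

def extract_from_figure(image_bytes: bytes, record: DeviceRecord) -> None:
    """Placeholder for a vision-language-model call that reads values off a plot."""
    # answer = call_vlm("What on/off ratio is shown?", image_bytes)  # hypothetical helper
    record.source.append("figure (VLM)")

record = DeviceRecord(material="IGZO")
extract_from_text("Devices with a gate length of 45 nm were fabricated ...", record)
extract_from_figure(b"", record)
print(record)
```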
This integrated approach enables scalable, data-driven design workflows that accelerate device discovery, enhance fabrication realism, and ultimately shorten the path from concept to highly advanced electronic systems.

Dr Chandrabhan Kushwah 

Machine Learning Enhanced TCAD Modelling of CMOS Devices

Accurate knowledge of the electrical characteristics of nanoscale CMOS devices is essential for convenient and reliable IC design. A microchip may contain thousands of circuits, each comprising many devices, so this knowledge is not easy to obtain at such a scale. Ideally, it is gained through very expensive IC fabrication followed by characterisation, which generally takes many months. An alternative is a calibrated Technology Computer-Aided Design (TCAD) modelling approach, where TCAD refers to the use of computer simulations to develop and optimise semiconductor manufacturing processes and devices. However, even TCAD becomes time-consuming when many devices must be simulated. Machine learning (ML) techniques are therefore gaining popularity as a way around this bottleneck: an ML model can be trained on a sufficient amount of data generated through TCAD simulations.
The major design figures of merit (FoMs) of CMOS devices include threshold voltage, subthreshold swing, and others. These FoMs are related to one another through the device geometry and doping concentrations, but physically modelling this interdependency has become increasingly complicated for nanoscale devices because of short-channel and quantum effects. An ML-driven methodology is therefore well suited to investigating it.
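A minimal sketch of this methodology is shown below: a multi-output regressor is trained on TCAD-style samples mapping geometry and doping to two FoMs at once, so the learned model captures their joint dependence. The analytic expressions standing in for TCAD output, and the parameter ranges, are invented for illustration.

```python
# Toy version of ML-enhanced TCAD modelling: learn the joint mapping from device
# geometry and doping to two FoMs (threshold voltage and subthreshold swing).
# The analytic expressions below merely imitate TCAD output for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
gate_length = rng.uniform(10, 50, n)          # nm
doping = rng.uniform(1e17, 1e19, n)           # cm^-3

# Invented stand-ins for TCAD-computed FoMs, including a short-channel-like roll-off
vth = 0.25 + 0.03 * np.log10(doping / 1e17) - 0.8 / gate_length   # V
ss = 60.0 + 300.0 / gate_length                                    # mV/dec

X = np.column_stack([gate_length, np.log10(doping)])
Y = np.column_stack([vth, ss])
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0)).fit(X_tr, Y_tr)
print("R^2 on held-out TCAD-style data:", model.score(X_te, Y_te))
```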

Dr Joydeep Ghosh

Accelerating Circuit Design and Testing with AI and Automation 

Not long ago, circuit designers had to manually draw schematics mapping out how each component connected. It was a detailed and time-consuming process. Now, large language models (LLMs) are changing that. With just a plain-language description of what a circuit should do, AI can generate the design automatically. 

I’m part of a cross-pillar collaborative project aimed at creating a workflow that automates circuit design using commercial Electronic Design Automation (EDA) tools. I am specifically focused on Radio Frequency (RF) design. By combining these tools with an LLM, users can simply describe the kind of circuit they need, and the system takes care of generating it. This significantly speeds up the process and lowers the barrier for users who may not be familiar with traditional design tools. 
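In outline, and with ask_llm as a placeholder rather than a real API, the describe-and-generate flow can be pictured as below: a plain-language request is turned into a SPICE netlist, which is given a crude structural sanity check before any real simulation or human review.

```python
# Hypothetical sketch of the describe-and-generate flow: a plain-language request
# is sent to an LLM, which returns a SPICE netlist that is sanity-checked before
# simulation. ask_llm is a placeholder, not an actual API.

def ask_llm(prompt: str) -> str:
    """Placeholder LLM call; a real system would query a model and EDA tool APIs."""
    return """* 2.4 GHz LC band-pass section (illustrative)
V1 in 0 AC 1
L1 in n1 3.3n
C1 n1 out 1.3p
R1 out 0 50
"""

def nodes_are_connected(netlist: str) -> bool:
    """Crude structural check: every node should appear in at least two elements."""
    counts = {}
    for line in netlist.splitlines():
        parts = line.split()
        if parts and not parts[0].startswith("*"):
            for node in parts[1:3]:
                counts[node] = counts.get(node, 0) + 1
    return all(c >= 2 for node, c in counts.items() if node != "0")

netlist = ask_llm("Design a 2.4 GHz LC band-pass matching section into 50 ohms.")
print("structurally plausible:", nodes_are_connected(netlist))
print(netlist)
```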

This approach doesn’t replace human expertise, as engineers still verify the results. However, it drastically reduces the time spent on the initial design phase. What used to take weeks can now be done in days, freeing engineers to focus on optimising, testing and validating their designs. 

Dr Panagiota Kontou 

The Topology of Circuits 

Circuits, as interconnected devices, are functional combinatorial structures that serve purposes far more powerful than those of their constituent devices. In the past months I have been devising methods for modelling the geometric and topological properties of circuits and their components in tandem, through a general multigranular framework. These properties not only encapsulate a circuit's connectivity but also capture the temporal dynamics of its functionality. We are building machine-learning algorithms that exploit this spatiotemporal behaviour for optimizing circuits across multiple, often competing, metrics.
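As a very small example of the objects involved, the sketch below represents a circuit as a graph whose nodes are electrical nets and whose edges are two-terminal devices, and reads off elementary topological invariants; the actual multigranular framework goes well beyond these quantities, and the toy circuit is arbitrary.

```python
# Minimal illustration of viewing a circuit as a graph: nodes are electrical nets,
# edges are two-terminal devices, and simple topological invariants (connected
# components b0, independent loops b1) follow directly from the connectivity.
import networkx as nx

G = nx.MultiGraph()
G.add_edge("vin", "n1", component="R1")
G.add_edge("n1", "out", component="R2")
G.add_edge("n1", "gnd", component="C1")
G.add_edge("out", "gnd", component="C2")
G.add_edge("vin", "gnd", component="V1")

b0 = nx.number_connected_components(G)                 # connected components
b1 = G.number_of_edges() - G.number_of_nodes() + b0    # independent loops (Betti-1)
print(f"nets: {G.number_of_nodes()}, devices: {G.number_of_edges()}, b0: {b0}, b1: {b1}")
```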

Dr Alexandros Keros 

AI-Driven Nanoelectronics: Skyrmion Computing and Inverse Design Methodologies 

My work has centred on magnetic skyrmion computing systems and their intersection with AI methodologies. The core investigation examines how machine learning techniques can inform the design of spintronic devices and, conversely, how skyrmion-based architectures might serve as computational substrates. The primary research thread involves inverse design frameworks for memristive skyrmion logic elements. The approach combines supervised learning with reinforcement learning techniques to explore design spaces for spintronic computing architectures. The goal is to develop systems where memory and computation occur at the device level rather than requiring separate components. 
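In spirit, and with every number below invented, the combination can be pictured as an evaluation function scoring candidate device parameters while a simple reinforcement-learning-style rule decides where to sample next; the sketch uses an epsilon-greedy agent over a discretised one-dimensional parameter grid.

```python
# Toy picture of RL-style design-space exploration: a bandit-like agent picks
# points on a discretised parameter grid, a made-up "device score" stands in for
# the evaluated design, and the agent maintains running value estimates.
# Everything here (grid, score function, epsilon) is an invented illustration.
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 21)       # normalised design parameter (e.g. anisotropy)

def evaluate_design(x):
    """Invented stand-in for a micromagnetic or surrogate evaluation of a design."""
    return np.exp(-((x - 0.63) ** 2) / 0.02) + 0.05 * rng.standard_normal()

values = np.zeros_like(grid)            # running value estimate per grid point
counts = np.zeros_like(grid)
epsilon = 0.2

for step in range(300):
    if rng.random() < epsilon:          # explore a random grid point
        i = int(rng.integers(len(grid)))
    else:                               # exploit the current best estimate
        i = int(np.argmax(values))
    reward = evaluate_design(grid[i])
    counts[i] += 1
    values[i] += (reward - values[i]) / counts[i]   # incremental mean update

best = int(np.argmax(values))
print(f"best design parameter: {grid[best]:.2f}, estimated score {values[best]:.2f}")
```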
I presented findings on AI-driven inverse design for skyrmion systems at MEMRISYS 2025 in Edinburgh, which brought together the memristor community to explore how AI accelerates memristor technologies and how memristors enable scalable, energy-efficient computing paradigms. The conference featured live demonstrations and discussions spanning memristor theory, materials, circuits, and systems for in-memory and unconventional computing. The presentation detailed the methodological framework and initial results from the design optimization studies. The investigation continues into bi-directional relationships between AI and emerging nanoelectronic systems, particularly examining how computational intelligence can accelerate design cycles for next-generation spintronic devices. 

Dr Santhosh Sivasubramani