Since its launch last year, the APRIL AI Hub has gone from strength to strength. Our team has shared their work at major conferences, contributed to outreach initiatives, welcomed talented interns, and produced resources that have reached schools across the UK and beyond. All the while, they’ve been making significant progress in their research.
Here’s an overview of what they’ve been working on:
Pioneering AI-Driven Nanoelectronics: From Intelligent Systems to Skyrmion Computing
Over the past six months, my research has spanned both collaborative APRIL initiatives and specialized investigations into magnetic skyrmion computing systems. We have been exploring how artificial intelligence can transform the entire electronics manufacturing ecosystem. This effort resulted in a comprehensive perspective paper published in Frontiers in Nanotechnology, which addresses AI's role across five critical pillars of the electronics supply chain: materials discovery, device design, circuit optimization, testing procedures, and predictive modeling. Drawing on industrial case studies, our framework demonstrates how machine learning can accelerate innovation, improve manufacturing yields, and promote sustainable practices throughout the semiconductor industry [DOI: 10.3389/fnano.2025.1627210].
My research focus has centered on the emerging field of skyrmion-based computing architectures. Magnetic skyrmions, as topologically protected nanoscale structures, offer unique advantages for next-generation electronics due to their stability and low-power manipulation capabilities. I've been developing novel design methodologies that explore the bidirectional relationship between AI and skyrmion technology, investigating both how AI can optimize skyrmion systems and how skyrmion-based devices can advance AI hardware. This work culminated in a publication in IEEE CAS Magazine, presenting comprehensive insights into "AI for Skyrmions and Skyrmions for AI – bidirectional integration" [DOI: 10.1109/MCAS.2025.3587613]. Additionally, I've been pioneering AI-driven inverse design frameworks for memristive skyrmion logic elements, combining supervised learning with reinforcement learning to enable real-time design of energy-efficient spintronic computing architectures that merge memory and computation at the device level [to be presented at MEMRISYS 2025].
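To give a flavour of what such an inverse-design loop involves, here is a minimal, purely illustrative sketch: a supervised surrogate model is trained on simulated device data, and a simple hill-climbing search (a stand-in for the reinforcement-learning component of the real framework) proposes device parameters. The "simulator", parameter names, and ranges are all hypothetical, not taken from the published work.

```python
# Illustrative sketch only: a toy inverse-design loop for a skyrmion device.
# The simulator, parameters, and ranges are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend "simulator": maps (diameter_nm, current_density) -> switching energy.
# In a real workflow this would be a micromagnetic or TCAD simulation.
def toy_simulator(params):
    d, j = params
    return 0.02 * (d - 60) ** 2 + 0.5 * (j - 1.2) ** 2 + rng.normal(0, 0.05)

# 1) Supervised stage: train a surrogate on simulated samples.
X = rng.uniform([20, 0.5], [120, 3.0], size=(500, 2))
y = np.array([toy_simulator(x) for x in X])
surrogate = RandomForestRegressor(n_estimators=100).fit(X, y)

# 2) Search stage (stand-in for the RL component): perturb the design and
#    keep moves that the surrogate predicts will lower the switching energy.
design = np.array([80.0, 2.0])
for step in range(200):
    candidate = np.clip(design + rng.normal(0, [2.0, 0.05]), [20, 0.5], [120, 3.0])
    if surrogate.predict([candidate])[0] < surrogate.predict([design])[0]:
        design = candidate

print(f"proposed design: diameter={design[0]:.1f} nm, J={design[1]:.2f} (arb. units)")
```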
To put this in perspective for a broader audience, imagine traditional computer chips where information is stored and processed separately, requiring energy-intensive data transfer. Skyrmions represent a revolutionary approach in which these nanoscale magnetic "bubbles", smaller than 100 nanometers, can simultaneously store and process information within the same device. Unlike conventional electronics, which lose information when powered off, skyrmions maintain their state indefinitely while consuming dramatically less energy. When combined with memristors (memory resistors that "remember" their resistance state), we can build adaptive computing systems that learn and evolve, much like biological neural networks. This line of work could lead to smartphones that need charging far less often, computers that boot up instantly, and AI systems that operate with unprecedented efficiency, all in a fraction of the space of current technology.
Dr Santhosh Sivasubramani
Joining APRIL and My Research
I was drawn to APRIL because of its focus on applying AI to real-world engineering challenges and its collaborative, impact-driven research environment. My background lies at the intersection of embedded systems, RF sensing, and electronics testing—domains that strongly align with APRIL’s mission. I’m particularly interested in how AI can enhance system-level design, testing, and verification, making these processes more efficient, adaptive, and scalable.
At APRIL, I plan to contribute to the development of intelligent tools and methods that enable more robust and automated electronic system design and testing. I’m passionate about translational research and advancing AI-enabled technologies toward practical application. The opportunity to work in a multidisciplinary setting and help push the boundaries of how AI supports innovation across industries is especially exciting to me.
Dr Panagiota Kontou
Why I Joined APRIL and My Research Interests
As a Design Verification Engineer with a strong foundation in SystemVerilog, UVM, and AI, I joined the APRIL AI Hub to explore the intersection of machine learning and hardware verification. The opportunity to contribute to next-generation semiconductor verification solutions, especially in collaboration with academic and industry leaders, deeply aligned with my career aspirations.
My primary research interests lie in automating and enhancing microelectronic design verification using AI/ML techniques, particularly reinforcement learning, predictive modelling, and anomaly detection in verification environments. I aim to investigate how large-scale verification workflows can be optimized through intelligent tooling and hybrid data-driven approaches, reducing time to tape-out and improving verification coverage. I'm excited to contribute to the APRIL community and develop impactful tools that bridge the gap between traditional DV methodologies and modern AI capabilities.
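As a flavour of the kind of tooling this could produce, here is a minimal sketch of anomaly detection over per-test regression metrics using an off-the-shelf detector. The feature names, data, and thresholds are invented for illustration and are not tied to any specific DV flow.

```python
# Illustrative only: flagging unusual regression runs with an anomaly detector.
# Features and data are hypothetical, not from a real verification environment.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Per-test features: [sim time (s), functional coverage delta (%), assertion fails]
normal_runs = np.column_stack([
    rng.normal(300, 30, 500),   # typical simulation times
    rng.normal(0.4, 0.1, 500),  # typical coverage gain per test
    rng.poisson(0.1, 500),      # assertion failures are rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_runs)

# A suspicious run: very long, adds no coverage, many assertion failures.
new_runs = np.array([[310.0, 0.45, 0], [900.0, 0.0, 7]])
print(detector.predict(new_runs))  # 1 = looks normal, -1 = flagged for triage
```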
Dr Sachin Raj Chowdary
Modelling and Digital Twins
Accurate knowledge of the electrical characteristics of nanoscale CMOS devices is essential for convenient and reliable IC design. A microchip may contain thousands of circuits, each comprising many devices, so such large-scale knowledge is not easily obtained. Ideally, it comes from very expensive IC fabrication runs followed by characterization, which generally takes many months. An alternative is a calibrated Technology Computer-Aided Design (TCAD) modelling approach, where TCAD refers to the use of computer simulations to develop and optimize semiconductor manufacturing processes and devices. However, even that may take a couple of months for several devices. Researchers therefore employ different machine learning (ML) approaches to circumvent this issue. To train an ML model, one must have sufficient seen data and then test the model on unseen data to check its accuracy; however, it is not straightforward to obtain large amounts of fabrication data from foundries.
Among the many applications of ML in the semiconductor industry is device optimization. By varying design parameters, this work aims to create a database of properties such as threshold voltage, subthreshold swing, transconductance, and drain-induced barrier lowering for a nanosheet FET using calibrated TCAD simulations. Different ML models can then be trained on this data and their prediction accuracy checked. Finally, the robustness of such a model can be evaluated by comparing its predictions against real fabricated devices reported in the literature.
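As a hedged sketch of how such a TCAD-derived database might be used (the parameter ranges, the synthetic relationship, and the data here are placeholders), one could train and validate a regressor like this:

```python
# Illustrative only: predicting a FET figure of merit from design parameters
# using a database generated by calibrated TCAD sweeps. Data here is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)

# Design parameters: [nanosheet width (nm), thickness (nm), gate length (nm)]
X = rng.uniform([10, 4, 12], [50, 10, 30], size=(1000, 3))
# Stand-in for a TCAD-extracted property, e.g. threshold voltage (V).
vth = (0.35 - 0.002 * X[:, 0] + 0.01 * X[:, 1] - 0.003 * X[:, 2]
       + rng.normal(0, 0.005, 1000))

X_seen, X_unseen, y_seen, y_unseen = train_test_split(
    X, vth, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_seen, y_seen)

# Prediction accuracy on unseen data; robustness would then be checked against
# measured devices reported in the literature.
print(f"R^2 on unseen simulations: {r2_score(y_unseen, model.predict(X_unseen)):.3f}")
```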
Dr Joydeep Ghosh
An Integrated Workflow Leveraging Language Models to Streamline Analog Accelerator Development
High-level performance targets are provided to an LLM, which generates a SPICE netlist for a memristor-based in-memory systolic array. A scripting layer automates Spectre simulations, extracts key metrics, and constructs concise prompts for the LLM to iteratively refine device dimensions. This closed-loop cycle, from specification through netlist generation, simulation, and AI-driven parameter tuning, compresses what is traditionally a multi-week manual process into a matter of hours.
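In outline, such a loop might look like the following skeleton. Every helper here is a hypothetical stand-in (the real flow drives Spectre and a production LLM), and the lower-is-better metric convention is an assumption made for the example.

```python
# Illustrative skeleton only: the helpers below are hypothetical stand-ins for
# the real LLM calls, Spectre runs, and metric extraction in the workflow.

def generate_netlist(llm_prompt: str) -> str:
    """Ask an LLM for a SPICE netlist of a memristor-based systolic array."""
    raise NotImplementedError("call your LLM provider here")

def run_simulation(netlist: str) -> dict:
    """Run Spectre on the netlist and return extracted metrics."""
    raise NotImplementedError("invoke Spectre and parse its output here")

def meets_targets(metrics: dict, targets: dict) -> bool:
    # Assumes lower-is-better metrics (e.g. energy per MAC, latency).
    return all(metrics.get(k, float("inf")) <= v for k, v in targets.items())

def design_loop(spec: str, targets: dict, max_iters: int = 10) -> str:
    prompt = f"Generate a SPICE netlist meeting this spec:\n{spec}"
    netlist = generate_netlist(prompt)
    for _ in range(max_iters):
        metrics = run_simulation(netlist)
        if meets_targets(metrics, targets):
            break
        # Feed measured metrics back so the LLM can refine device dimensions.
        prompt = (f"The netlist below gave {metrics}, versus targets {targets}. "
                  f"Adjust device dimensions and return a revised netlist.\n{netlist}")
        netlist = generate_netlist(prompt)
    return netlist
```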
By embedding AI directly into the design loop, this methodology lowers the expertise barrier and accelerates analog accelerator iterations. The result is a more accessible and efficient path from concept to tape-out for energy-efficient edge AI hardware.
Dr Pratibha Verma
Building Intelligent Systems for FinFET Design and Data Extraction
Over the past few months, my research has focused on developing a hybrid, modular pipeline for extracting critical parameters from FinFET literature. This system combines OCR (Tesseract), rule-based and LLM-assisted parsing (GPT-4), and unit normalization to automate the curation of experimental data from scanned PDFs and datasheets. The resulting Python-based framework is highly modular and designed for integration with cloud platforms such as Azure, enabling large-scale data storage, backup, and analysis. The system is enhanced with a local Streamlit GUI and semantic filtering using SciBERT to distinguish between simulation and experimental work—streamlining the search for data relevant to device design and benchmarking. A key milestone was designing a custom search engine using open APIs like Semantic Scholar and arXiv, backed by Azure Blob Storage and Functions for scheduling and scalability.
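As a simplified, hypothetical slice of that pipeline (the real system adds LLM-assisted parsing, SciBERT filtering, and Azure-backed storage), the OCR and rule-based extraction stage might look like the sketch below. The regex pattern and unit table are minimal placeholders.

```python
# Illustrative only: one simplified stage of the extraction pipeline.
# Regex patterns and unit handling here are minimal placeholders.
import re
import pytesseract
from PIL import Image

UNIT_SCALE = {"mV": 1e-3, "V": 1.0}  # normalize everything to volts

def extract_vth(scanned_page: str) -> list[float]:
    """OCR a scanned page and pull out threshold-voltage mentions."""
    text = pytesseract.image_to_string(Image.open(scanned_page))
    # Matches patterns such as "Vth = 0.35 V" or "VTH: 320 mV".
    pattern = re.compile(r"V\s*[Tt][Hh]\s*[=:]\s*([\d.]+)\s*(mV|V)")
    return [float(val) * UNIT_SCALE[unit] for val, unit in pattern.findall(text)]

# Hypothetical usage: extract_vth("finfet_paper_page3.png") -> [0.35, 0.32, ...]
```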
In parallel, I’ve been working on a TCAD-augmented machine learning framework for predictive FinFET design. This involves using autoencoders trained on synthetic simulation data to extract latent features, enabling downstream regression models to predict critical parameters like Vth, Ion/Ioff, and Lg, and support inverse design workflows. This approach aims to reduce reliance on trial-and-error experimentation by leveraging simulation, ML, and empirical trends. I’ve also supported three intern-led projects spanning AI-driven TCAD calibration, physics-informed RL control, and LLM-based functional verification.
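A minimal sketch of the autoencoder idea follows, with toy dimensions and entirely synthetic "simulation" data standing in for the TCAD sweeps; it is not the production model.

```python
# Illustrative only: learn latent features from synthetic "TCAD" sweeps with an
# autoencoder, then regress a device parameter from the latent code.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 16)).astype("float32")    # stand-in simulation features
y = X[:, :3].sum(axis=1) + rng.normal(0, 0.1, 2000)  # stand-in target, e.g. Vth

class AutoEncoder(nn.Module):
    def __init__(self, dim=16, latent=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 8), nn.ReLU(), nn.Linear(8, dim))
    def forward(self, x):
        return self.dec(self.enc(x))

model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.from_numpy(X)
for _ in range(500):  # full-batch reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)
    loss.backward()
    opt.step()

# Downstream regression on the learned latent features.
with torch.no_grad():
    Z = model.enc(data).numpy()
print(f"latent-feature R^2: {Ridge().fit(Z, y).score(Z, y):.3f}")
```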
Dr Chandrabhan Kushwah
Materials Discovery
Materials discovery faces many challenges, from accurately predicting atomic dynamics and emergent electronic properties to synthesizing the predicted materials in the lab. Computational topology and geometry can aid in the analysis of the high-dimensional dynamic data produced by atomic simulations or monitored in the lab through various imaging techniques. My research focuses on studying configuration spaces of atoms as a means to improve machine-learning structure and property predictors.
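As a small, hedged illustration of the kind of topological featurization involved (toy atomic positions, using the open-source GUDHI library rather than my actual pipeline):

```python
# Illustrative only: summarize a toy atomic configuration with persistent
# homology, a topological signature that can feed ML property predictors.
import numpy as np
import gudhi

rng = np.random.default_rng(4)
atoms = rng.uniform(0, 10, size=(100, 3))  # toy 3D atomic positions

# Build a Vietoris-Rips complex on the point cloud and compute persistence.
rips = gudhi.RipsComplex(points=atoms, max_edge_length=4.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)
diagram = simplex_tree.persistence()

# Persistence pairs (birth/death of connected components, loops, ...) act as
# geometry-aware features describing the configuration of the atoms.
print(diagram[:5])
```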
Beyond materials discovery, I collaborate with fellow APRIL Hub researchers and our DeepMind interns to apply topological and geometric methods to the discovery of beyond-CMOS technologies, and to circuit design and automation.
Dr Alexandros Keros
AI-Driven Design of Polymer Nanocomposites for Smarter, Flexible Sensors
Flexible sensors are at the heart of next-gen tech, from wearable health monitors to artificial skin for robots. But what makes these sensors both flexible and functional? The secret lies in polymer nanocomposites: materials that blend the stretchiness of polymers with the high-performance features of nanoscale fillers like carbon nanotubes, graphene, or metal nanoparticles. These tiny additives give the polymer new powers, like better conductivity, sensitivity to pressure or strain, and improved durability. The result? Lightweight, bendable sensors that can detect changes in their environment with incredible precision.
Designing these nanocomposites, though, isn't easy. It's a bit like baking the perfect cake, where every ingredient and step (filler type, concentration, dispersion quality, processing temperature) affects the final outcome. That's where AI steps in. Instead of relying on endless trial-and-error experiments, I am now exploring machine learning models that predict how different material combinations will behave. AI can analyse vast datasets, learn complex patterns, and recommend optimal recipes for polymer nanocomposites tailored to specific sensing needs. Even more exciting, I believe that within a few years, generative AI models and high-throughput labs working together will move us toward fully automated material design. That means faster development, smarter sensors, and new possibilities in flexible electronics, all powered by the synergy of nanotechnology and artificial intelligence.
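To make the "recipe prediction" idea concrete, here is a toy version of such a predictor. The features, the synthetic relationship between recipe and sensitivity, and the data are all invented for illustration.

```python
# Illustrative only: predicting a sensor property from nanocomposite "recipes".
# Features, data, and the underlying relationship are entirely synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 400

filler_type = rng.integers(0, 3, n)       # 0=CNT, 1=graphene, 2=Ag nanoparticles
concentration = rng.uniform(0.1, 5.0, n)  # filler loading (wt%)
process_temp = rng.uniform(60, 180, n)    # processing temperature (deg C)
X = np.column_stack([filler_type, concentration, process_temp])

# Stand-in for a measured property, e.g. strain-gauge sensitivity.
sensitivity = ((1 + filler_type) * np.log1p(concentration)
               + 0.01 * process_temp + rng.normal(0, 0.2, n))

model = RandomForestRegressor(n_estimators=200).fit(X, sensitivity)

# Screen candidate recipes in silico instead of trial-and-error in the lab.
candidates = np.array([[1, 2.5, 120], [0, 4.0, 90]])
print(model.predict(candidates))
```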
Dr Shashank Mishra