Publications

Our vision is to rejuvenate modern electronics by developing and enabling a new approach to electronic systems where reconfigurability, scalability, operational flexibility/resilience, power efficiency and cost-effectiveness are combined. Below is a list of our current publications, which support our work toward this vision.

January 2026

PREPRINT: A Flexible Language Model-Assisted Electronic Design Automation Framework

Large language models (LLMs) are transforming electronic design automation (EDA) by enhancing design stages such as schematic design, simulation, netlist synthesis, and place-and-route. Existing methods primarily apply these optimisations within isolated open-source EDA tools and often lack the flexibility to handle multiple domains, such as analogue, digital, and radio-frequency design. In contrast, modern systems must interface with commercial EDA environments, adhere to tool-specific operation rules, and incorporate feedback from design outcomes while supporting diverse design flows. We propose a versatile framework that uses LLMs to generate files compatible with commercial EDA tools and to optimise designs using power-performance-area reports. This is accomplished by guiding the LLMs with tool constraints and feedback from design outputs to meet tool requirements and user specifications. Case studies on operational transconductance amplifiers, microstrip patch antennas, and FPGA circuits show that the framework is effective as an EDA-aware assistant, handling diverse design challenges reliably.
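
As a rough illustration of the feedback loop this abstract describes, the sketch below wires an LLM call, a tool invocation, and PPA-report parsing into an iterative refinement cycle. The helper functions (query_llm, run_eda_tool, parse_ppa) are hypothetical placeholders, not the paper's actual interfaces.

```python
# Sketch of an LLM-in-the-loop EDA flow. All helpers are hypothetical
# placeholders, not the paper's API.

def query_llm(prompt: str) -> str:
    """Placeholder: call any LLM API and return the generated design file."""
    raise NotImplementedError

def run_eda_tool(design_file: str) -> str:
    """Placeholder: invoke a commercial EDA tool and return its report text."""
    raise NotImplementedError

def parse_ppa(report: str) -> dict:
    """Placeholder: extract power/performance/area figures from a report."""
    raise NotImplementedError

def meets_spec(ppa: dict, spec: dict) -> bool:
    # Assumes lower-is-better metrics for simplicity.
    return all(ppa[k] <= spec[k] for k in spec)

def design_loop(task: str, tool_rules: str, spec: dict, max_iters: int = 5):
    feedback = ""
    design_file, ppa = "", {}
    for _ in range(max_iters):
        # Guide the LLM with tool-specific rules and previous-run feedback.
        prompt = f"{task}\nTool constraints:\n{tool_rules}\nFeedback:\n{feedback}"
        design_file = query_llm(prompt)
        ppa = parse_ppa(run_eda_tool(design_file))
        if meets_spec(ppa, spec):
            break
        feedback = f"Previous PPA: {ppa}. Improve the violated metrics."
    return design_file, ppa
```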

Dr. Cristian Sestito, Dr. Panagiota Kontou, Dr. Pratibha Verma, Dr. Atish Dixit, Prof. Michael O'Boyle, Prof. Christos Bouganis, Prof. Themis Prodromakis

August 2025

Design Methodologies for Skyrmion-Based Circuits and Systems in AI-Driven Applications: Bi-Directional Integration [Feature]

Magnetic skyrmions are nanoscale bubbles that emerge in specific materials due to unique magnetic interactions. These structures are not only stable but also require minimal energy to manipulate, positioning them as promising elements for next-generation electronic devices. Their inherent properties, such as topological stability, low power requirements, and potential for miniaturization, make them ideal for developing energy-efficient computing systems, including logic gates, arithmetic units, and in-memory computing architectures, as well as artificial neurons. This feature review explores the extensive range of applications for skyrmions, emphasizing their role in both fundamental computing operations and complex AI systems. It discusses the dynamics of skyrmions and their integration into basic and advanced AI architectures. The review aims to synthesize recent progress in the field of skyrmionics, highlighting their capability to revolutionize future computing technologies by improving energy efficiency and system scalability. Furthermore, the review explores the reciprocal relationship between AI and skyrmion technology. It examines how AI can enhance the understanding and optimization of skyrmion systems, thereby boosting their effectiveness in AI applications. Conversely, it also considers how skyrmion-based technologies facilitate advancements in AI, creating a bi-directional flow of benefits. This dual focus not only underscores the versatility of skyrmions in AI contexts but also highlights the symbiotic advancements achievable through this emerging technology integration, opening pathways for both AI for skyrmions and skyrmions for AI.

Prof. Themis Prodromakis, Dr. Santhosh Sivasubramani (He/Him), Prof. Vihar Georgiev, Prof. Rishad Shafik

July 2025

TrIM, Triangular Input Movement Systolic Array for Convolutional Neural Networks: Dataflow and Analytical Modelling

In order to follow the ever-growing computational complexity and data intensity of state-of-the-art AI models, new computing paradigms are being proposed. These paradigms aim at achieving high energy efficiency by mitigating the Von Neumann bottleneck that relates to the energy cost of moving data between the processing cores and the memory. Convolutional Neural Networks (CNNs) are susceptible to this bottleneck, given the massive data they have to manage. Systolic arrays (SAs) are promising architectures to mitigate data transmission cost, thanks to the high data utilization of Processing Elements (PEs). These PEs continuously exchange and process data locally based on specific dataflows (such as weight stationary and row stationary), in turn reducing the number of accesses to the main memory. In SAs, convolutions are managed either as matrix multiplications or by exploiting the raster-order scan of sliding windows. However, data redundancy is a primary concern affecting area, power, and energy. In this paper, we propose TrIM: a novel dataflow for SAs based on a Triangular Input Movement and compatible with CNN computing. TrIM maximizes the local input utilization, minimizes the weight data movement, and solves the data redundancy problem. Furthermore, TrIM does not incur the significant on-chip memory penalty introduced by the row stationary dataflow. When compared to state-of-the-art SA dataflows, the high data utilization offered by TrIM guarantees $\sim 10\times$ fewer memory accesses. Furthermore, since PEs continuously overlap multiplications and accumulations, TrIM achieves high throughput (up to 81.8% higher than row stationary) while requiring a limited number of registers (up to $15.6\times$ fewer than row stationary).
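
The sketch below is not the TrIM dataflow itself, only a back-of-envelope accounting of why local input reuse matters: it compares main-memory reads for a naive sliding-window convolution, where every output window re-fetches its inputs, against an idealized dataflow that fetches each input once and circulates it among PEs.

```python
# NOT the TrIM dataflow itself: a back-of-envelope count of main-memory
# reads for a K x K convolution over an H x W input, comparing a naive
# scheme (each output window re-fetches its inputs) with full input reuse
# (each input fetched once, then circulated locally among PEs).

def conv_memory_reads(H, W, K, stride=1):
    out_h = (H - K) // stride + 1
    out_w = (W - K) // stride + 1
    naive = out_h * out_w * K * K   # every window re-reads its K*K inputs
    ideal = H * W                   # each input read from memory only once
    return naive, ideal

naive, ideal = conv_memory_reads(H=32, W=32, K=3)
print(f"{naive / ideal:.1f}x fewer reads with full reuse")  # ~7.9x here
```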

Dr. Cristian Sestito, Prof. Themis Prodromakis

June 2025

Live Demonstration: Hardware/Software Co-Design to Exploit RRAM Programmability for Emerging Edge Classification Using ArC TWO

In this demonstration, we present a hardware/software co-design methodology for Convolutional Neural Networks, where the classification stage is managed through Resistive RAMs (RRAMs). To this end, RRAM arrays are mounted onto the ArC TWO instrumentation board, which is interfaced to a laptop. A software Python front-end executes the convolutional layers for feature extraction, generates stimuli for the RRAMs, and controls the instrumentation board. As a proof of concept, handwritten digit classification is demonstrated.
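
A minimal numpy sketch of the co-design split described above: the convolutional front-end runs in software, while the classifier is a matrix-vector product mapped onto an RRAM crossbar, where feature voltages applied across columns of conductances G produce class currents by Ohm's law and Kirchhoff summation. The ArC TWO control layer is omitted, and software_features is a placeholder, so this is an idealized stand-in rather than the demo's actual code.

```python
import numpy as np

# Idealized stand-in for the co-design split: software convolution
# front-end plus an RRAM-crossbar classifier. The ArC TWO control layer
# is omitted; software_features is a placeholder for the real conv layers.

def software_features(image: np.ndarray) -> np.ndarray:
    """Placeholder for the software convolutional feature extractor."""
    return image.reshape(-1)

def rram_classifier(features: np.ndarray, G: np.ndarray) -> int:
    # Each crossbar column of conductances G encodes one class's weights.
    # Applying features as read voltages yields column currents I = G^T v.
    currents = G.T @ features
    return int(np.argmax(currents))

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(784, 10))  # illustrative conductances (S)
digit = rram_classifier(software_features(rng.random((28, 28))), G)
```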

Dr. Cristian Sestito, Prof. Themis Prodromakis

June 2025

Reaching new frontiers in nanoelectronics through artificial intelligence

Artificial Intelligence (AI) is revolutionizing industries worldwide, delivering unprecedented productivity gains across diverse sectors, from healthcare to manufacturing. Recent advances in generative AI models have particularly accelerated innovation, enabling more efficient execution of complex tasks such as drug discovery, autonomous driving, and predictive maintenance. In electronics manufacturing, a sector crucial to the advancement of modern technologies, the impact of AI is profound, with the potential to transform every stage of the supply chain. This perspective investigates the role of AI in reshaping the electronics and semiconductor industries, exploring how it integrates into various stages of production and development. The approach to AI integration is structured and methodical, addressing both challenges and opportunities across five key nanotechnology areas: materials discovery, device design, circuit and system design, testing/verification, and modeling. In materials discovery, AI aids in identifying new, more efficient, and sustainable materials. In device design, it enhances the functionality and integration of components. AI's capabilities in circuit and system design enable more complex and precise electronic systems. During the testing and verification stage, AI contributes to more rigorous and faster testing processes, ensuring reliability before market release. Finally, in modeling, AI's predictive capabilities allow for accurate simulations, crucial for anticipating performance under various scenarios. Each pillar of this electronics supply chain underscores AI's ability to accelerate processes, optimize performance, and reduce costs. Supported by case studies of AI-driven breakthroughs, this perspective provides a comprehensive review of current AI applications across the entire electronic supply chain, illustrating improvements in yield and sustainable manufacturing practices.

Prof. Themis Prodromakis, Dr. Santhosh Sivasubramani (He/Him)

May 2025

Generative Process Variation Modeling and Analysis for Advanced Technology Based on Variational Autoencoder

The development of advanced technology nodes highlights the significant impact of process variations on device electrical characteristics. To analyze and understand these variations, extensive and intensive technology computer-aided design (TCAD) simulations or costly on-wafer testing are often indispensable. This article proposes a novel generative process-variation modeling method that alleviates this burden: it can learn from a few discrete sampling points and reproduce or generate analytical electrical responses for variation analysis and circuit simulation without requiring predefined domain knowledge or empirical equations. A silicon nanowire (NW) transistor is employed to showcase the strength of the proposed method, considering two complex process variabilities: metal grain granularity (MGG) and random discrete dopants (RDDs). The models trained on $I_{d}$–$V_{g}$ curves achieve median percentage errors of 0.7% (best case) and 2.8% (worst case). Furthermore, the proposed framework is transfer-learnable, allowing data from a new variability source to be added to a trained model, resulting in even greater accuracy and further reducing the cost of data collection.
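
For readers unfamiliar with the underlying model class, here is a minimal variational-autoencoder sketch for curve generation, assuming each training sample is an $I_{d}$–$V_{g}$ curve discretized at n_points gate voltages; the layer sizes and latent dimension are illustrative assumptions, not those used in the article.

```python
import torch
import torch.nn as nn

# Minimal VAE sketch for generating discretized I-V curves. Sizes are
# illustrative, not the article's architecture.

class CurveVAE(nn.Module):
    def __init__(self, n_points=64, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_points, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, n_points))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.dec(z)
        # ELBO = reconstruction error + KL divergence to the unit Gaussian.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(recon, x, reduction="sum") + kl
        return recon, loss
```

Once trained, sampling the latent space (decoding random z vectors) generates new, statistically consistent curves for variation analysis, which is the generative use the abstract describes.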

Prof. Vihar Georgiev, Dr. Ankit Dixit

March 2025

Nano-ionic Solid Electrolyte FET-Based Reservoir Computing for Efficient Temporal Data Classification and Forecasting

Physical dynamic reservoirs are well-suited for edge systems, as they can efficiently process temporal input at a low training cost by utilizing the short-term memory of the device for in-memory computation. However, the short-term memory of two-terminal memristor-based reservoirs limits the duration of the temporal inputs, resulting in more reservoir outputs per sample for classification. Additionally, forecasting requires multiple devices (20–25) for the prediction of a single time step, and long-term forecasting requires the reintroduction of forecasted data as new input, increasing system complexity and costs. Here, we report an efficient reservoir computing system based on a three-terminal nano-ionic solid electrolyte FET (SE-FET), whose drain current can be regulated via gate and drain voltages to extend the short-term memory, thereby increasing the duration and length of the temporal input. Moreover, the use of a separate control terminal for read and write operations simplifies the design, enhancing reservoir efficiency compared to that of two-terminal devices. Using this approach, we demonstrate a longer mask length or bit sequence, which yields an accuracy of 95.41% for the classification of handwritten digits. Furthermore, this accuracy is achieved using 51% fewer reservoir outputs per image sample, which significantly reduces the hardware and training cost without sacrificing classification accuracy. We also demonstrate long-term forecasting by using 50 previous data steps generated by an SE-FET-based reservoir consisting of four devices to predict the next 50 time steps without any feedback loop. This approach results in a low root-mean-square error of 0.06 in the task of chaotic time-series forecasting, outperforming a standard linear regression machine learning algorithm by 53%.
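
The masking and readout described here follow the standard time-multiplexed reservoir recipe, sketched below with the SE-FET replaced by a toy leaky nonlinear node: a random mask spreads each input over virtual nodes, the node's short-term memory mixes them, and only a linear ridge-regression readout is trained. All parameters are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Time-multiplexed reservoir recipe with the SE-FET replaced by a toy
# leaky nonlinear node. A random mask spreads each input sample over
# virtual nodes; only the linear readout is trained (ridge regression).

def reservoir_states(u, mask, leak=0.3):
    x, states = 0.0, []
    for sample in u:                 # u: sequence of scalar inputs
        for m in mask:               # one virtual node per mask element
            x = (1 - leak) * x + leak * np.tanh(5.0 * m * sample)
            states.append(x)
    return np.array(states).reshape(len(u), -1)

rng = np.random.default_rng(0)
mask = rng.choice([-1.0, 1.0], size=20)  # illustrative mask length
u = rng.uniform(0, 1, 200)
X = reservoir_states(u, mask)
y = np.roll(u, -1)                   # one-step-ahead forecasting target
# Ridge-regression readout: the only trained part of the system.
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
prediction = X @ W
```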

Prof. Merlyne De Souza

December 2024

Neural Ordinary Differential Equations for Predicting the Temporal Dynamics of a ZnO Solid Electrolyte FET

Efficient storage and processing are essential for temporal data processing applications to make informed decisions, especially when handling large volumes of real-time data. Physical reservoir computing provides effective solutions to this problem, making it ideal for edge systems. These devices typically necessitate compact models for device-circuit co-design. Alternatively, machine learning (ML) can quickly predict the behaviour of novel materials/devices without explicitly defining any material properties or device physics. However, previously reported ML device models are limited by their fixed hidden-layer depth, which restricts their adaptability to predict the varying temporal dynamics of a complex system. Here, we propose a novel approach that utilizes a continuous-time model based on neural ordinary differential equations to predict the temporal dynamic behaviour of a charge-based device, a solid electrolyte FET, whose gate current characteristics show a unique negative differential resistance that leads to steep switching beyond the Boltzmann limit. Our model, trained on a minimal experimental dataset, successfully captures device transient and steady-state behaviour for previously unseen examples of excitatory postsynaptic current when subject to inputs of variable pulse width lasting 20–240 milliseconds, with a root-mean-squared error as low as 0.06. Additionally, our model predicts device dynamics in ∼5 seconds, with 60% reduced error over a conventional physics-based model, which takes nearly an hour on an equivalent computer. Moreover, the model can predict the variability of device characteristics from device to device by a simple change in the frequency of the applied signal, making it a useful tool in the design of neuromorphic systems such as reservoir computing. Using the model, we demonstrate a reservoir computing system that achieves an error rate as low as 0.2% in the task of spoken-digit classification.
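
As a minimal sketch of the neural-ODE idea, the code below learns dI/dt as a small network of the current state and the applied gate voltage, then integrates it forward with a fixed-step Euler loop; the architecture, solver, and inputs are illustrative assumptions, not the paper's setup (which would typically use an adaptive ODE solver).

```python
import torch
import torch.nn as nn

# Minimal neural-ODE sketch: a small network learns dI/dt as a function
# of the current state and the applied gate voltage, and a fixed-step
# Euler integrator rolls the state forward in continuous time. Sizes,
# solver, and inputs are illustrative, not the paper's setup.

class DeviceODE(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, i0, v_of_t, t_grid):
        trace = [i0]
        for k in range(len(t_grid) - 1):
            dt = t_grid[k + 1] - t_grid[k]
            # Learned derivative of current given (state, gate voltage).
            didt = self.f(torch.stack([trace[-1], v_of_t[k]], dim=-1))
            trace.append(trace[-1] + dt * didt.squeeze(-1))  # Euler step
        return torch.stack(trace)

model = DeviceODE()
t = torch.linspace(0.0, 1.0, 100)
v = torch.ones(100, 1)                 # constant gate-voltage pulse
i = model(torch.zeros(1), v, t)        # predicted current trace, (100, 1)
```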

Prof. Merlyne De Souza