DeepIP – Deep Learning on FPGA Fabric

Deep Neural Network IP Core for FPGAs

With DeepIP, you can deploy your machine learning model on a Xilinx FPGA in one day.  

DeepIP creates a direct fabric neural network with the lowest latency and power consumption available for an embedded platform, and still leaves plenty of resources for higher level applications. Deploying DeepIP is significantly faster and cheaper than creating your own ASIC.

DeepIP will help you add deep learning in your FPGA project without needing to know the mathematics behind AI.

We want to help FPGA developers get AI out of simulations and into the field as quickly as possible. Our tools help both seasoned veterans and those new to AI rapidly turn their ideas into deployable platforms.

We created DeepIP for our own use and realized it could help designers of all experience levels. Now you can focus on training, not on writing code.

Our R&D teams are always creating new proof-of-concept designs and testing AI models in real-world environments. We developed DeepIP to reduce the time required to get deep learning out of the lab and onto embedded platforms. With DeepIP, you can easily test a new model by running a script on your Keras or MATLAB model and importing it into Vivado. You can quickly adjust inputs, outputs, and layers for your model and then evaluate its performance in the field.
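To give a feel for that workflow, here is a minimal sketch of a model description a conversion script could consume. The JSON field names, layer sizes, and structure below are purely illustrative assumptions for this example, not DeepIP's actual file format.

```python
import json

# Hypothetical sketch: a small Keras-style model summarized as a JSON
# description that a conversion script could translate into IP core
# parameters. All field names here are illustrative, not DeepIP's format.
model_desc = {
    "inputs": 128,    # number of input samples per inference
    "outputs": 3,     # number of output classes
    "layers": [
        {"type": "dense", "units": 64, "activation": "relu"},
        {"type": "dense", "units": 3, "activation": "softmax"},
    ],
}

# Serialize the description; adjusting inputs, outputs, or layers is
# just a matter of editing this structure and re-running the conversion.
config_json = json.dumps(model_desc, indent=2)
print(config_json)
```

Changing the network then amounts to editing this description and re-importing, rather than rewriting HDL by hand.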


Adding AI to your FPGA design can be done in minutes

DeepIP doesn’t require a DPU, so you can use it on both new and existing platforms

The IP core is designed to work using FPGA fabric. You do not need any special hardware. This means you can implement deep learning on most standard Xilinx FPGAs, and even evaluate using Deep Learning on your existing designs without having to upgrade your chipset.
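Mapping a network directly onto fabric typically means fixed-point rather than floating-point arithmetic. The sketch below shows the general idea of quantizing trained weights to fixed-point words; the word width and fraction bits are illustrative assumptions, not DeepIP's actual parameters.

```python
# Sketch: fixed-point weight quantization, a typical step when
# implementing a neural network directly in FPGA fabric.
# word_bits/frac_bits are assumed values for illustration only.
def to_fixed_point(weights, word_bits=16, frac_bits=12):
    scale = 1 << frac_bits            # 2^12 = 4096
    lo = -(1 << (word_bits - 1))      # -32768, smallest representable word
    hi = (1 << (word_bits - 1)) - 1   # 32767, largest representable word
    # Scale, round, and saturate each weight into the word range.
    return [max(lo, min(hi, round(w * scale))) for w in weights]

# 0.5 -> 2048, -0.25 -> -1024, 10.0 saturates to 32767
print(to_fixed_point([0.5, -0.25, 10.0]))
```

Saturating rather than wrapping on overflow keeps out-of-range weights from flipping sign, which matters when the fabric datapath has a fixed word width.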

Here’s an example we’re using to classify 5G radio signals

Using our DeepRadio platform, we integrated DeepIP to perform modulation classification, identifying various radio signals including 5G.

Signal Classifier Resource Utilization

DeepIP Core Resource Usage for 5G Classifier

Resource   Used    Available   Utilization %
LUT        29511   274080      10.77
LUTRAM     654     144000      0.45
FF         39297   548160      7.17
BRAM       122     912         13.38
DSP        190     2520        7.54
IO         78      204         38.24
BUFG       12      404         2.97
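The utilization percentages in the table are simply used resources divided by available resources. A quick sanity check of the arithmetic:

```python
# Verify the Utilization % column of the resource table:
# utilization % = used / available * 100, rounded to two decimals.
table = {
    "LUT":    (29511, 274080),
    "LUTRAM": (654, 144000),
    "FF":     (39297, 548160),
    "BRAM":   (122, 912),
    "DSP":    (190, 2520),
    "IO":     (78, 204),
    "BUFG":   (12, 404),
}
percent = {name: round(used / avail * 100, 2)
           for name, (used, avail) in table.items()}
print(percent)
```

At roughly 11% LUT and 13% BRAM usage, the classifier leaves most of the device free for the rest of the application.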

We are offering a free trial so you can get Deep Learning into your FPGA project right away with no risk.

To get your free trial, click here and fill out the form. We will follow up with additional information, the user guide, and the IP core with a trial license.

Modulation Classifier Application Detects:

  • 5G
  • OFDM
  • MOD

Product Details

Use Cases
  • Computer vision
  • Natural language processing
  • Wireless communications
  • Embedded security
What’s Included
  • Conversion script
  • Flexible neural network architecture
  • User guide
  • Demo application
  • Trial license
Additional Services
  • Deploy software
  • Hardware and software design services
  • R&D services