POSIX Experiment
FPGAs can in theory do everything traditional CPU systems can do (e.g. one could trivially implement a CPU). But there are so many layers of abstraction built up in software that hardware designers often feel like they are designing from the ground up when trying to mimic any truly high-level functionality. Hardware is hard.
High Level Synthesis (HLS) tools have come to the rescue and maybe they work for you.
But what if there were some kind of familiar POSIX-looking interface that made host+FPGA resources easily available, allowing developers to focus on the needs of their particular application? One written in a "hardware description language", yet hardware agnostic enough to be "cross platform". Can PipelineC be even more software-developer friendly?
So what does an FPGA POSIX (FOSIX?) implementation even look like? Well, it starts off as just a conceptual packet/network protocol which could be implemented any number of ways. The current example uses an Amazon EC2 F1 FPGA instance. AXI4 DMA is used as the transport for the protocol to/from the instance host operating system.
We needed some way to communicate system call arguments and return values over arbitrary hardware links. Rather than specifying a hardware implementation, we specify a serializable protocol that can be parsed in hardware any number of ways.
For example, consider a simplified version of the open
system call.
int open(const char *path);
The input is a string of characters specifying a path, and the return value is the opened file descriptor. An open request 'packet' might be packed into bytes like so:
OPEN REQUEST
Byte[0] = SysCall ID (=OPEN)
Byte[1] = Path len
Byte[2+] = Path characters
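The byte layout above can be sketched as a small host-side C packing function. This is a minimal illustration, not the actual protocol implementation; the `SYSCALL_ID_OPEN` value of 0 is a placeholder assumption.

```c
#include <stdint.h>
#include <string.h>

#define SYSCALL_ID_OPEN 0 /* placeholder ID value, not from the real protocol */

/* Pack an OPEN request into a byte buffer per the layout above.
   Returns the total number of bytes written. */
size_t pack_open_req(const char *path, uint8_t *buf)
{
    size_t len = strlen(path);
    buf[0] = SYSCALL_ID_OPEN;   /* Byte[0]  = SysCall ID      */
    buf[1] = (uint8_t)len;      /* Byte[1]  = Path len        */
    memcpy(&buf[2], path, len); /* Byte[2+] = Path characters */
    return 2 + len;
}
```

The same layout could just as easily be serialized/deserialized by hardware shift registers or a state machine walking the bytes one at a time.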
The response from open might be packed into bytes like so:
OPEN RESPONSE
Byte[0] = SysCall ID (=OPEN)
Byte[1] = File Descriptor
Byte[2] = perror code?
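Correspondingly, unpacking the response bytes into a struct might look like the following sketch (again illustrative only; the field widths match the byte layout above, and the struct name is hypothetical):

```c
#include <stdint.h>

/* Unpacked OPEN response, mirroring the byte layout above. */
typedef struct open_resp_t
{
    uint8_t syscall_id; /* Byte[0] = SysCall ID (=OPEN) */
    int fd;             /* Byte[1] = File Descriptor    */
    uint8_t err;        /* Byte[2] = error code         */
} open_resp_t;

open_resp_t unpack_open_resp(const uint8_t *buf)
{
    open_resp_t r;
    r.syscall_id = buf[0];
    r.fd = (int)buf[1];
    r.err = buf[2];
    return r;
}
```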
The specifics of the current protocol implementation are shown here. The point is that system call data moves around over a packet-based interface. Once the data reaches the hardware, easier-to-use interfaces can be constructed as needed.
The protocol can be parsed/streamed from/to hardware modules as needed. For example, if you expect system calls to return large amounts of data (e.g. a many-byte read() response) then perhaps the hardware interface should include a streaming interface.
typedef struct read_resp_t
{
size_t nbytes; // Number of bytes that will be streaming over the AXIS interface below
axis32_t axis; // Valid+data, etc bus for sending packets of data
} read_resp_t;
Or perhaps you are always reading some smaller fixed amount of data, e.g. 0-16 bytes:
typedef struct read_resp_t
{
size_t nbytes; // Number of bytes out of 16 that are valid below
uint8_t data[16]; // Up to 16 bytes of data
} read_resp_t;
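With the fixed-size variant, consuming a response on the software side is just a matter of copying out the valid bytes. A minimal sketch, assuming the struct definition above (the helper function name is hypothetical):

```c
#include <stdint.h>
#include <string.h>

typedef struct read_resp_t
{
    size_t nbytes;    /* Number of bytes out of 16 that are valid below */
    uint8_t data[16]; /* Up to 16 bytes of data */
} read_resp_t;

/* Copy only the valid bytes out of a fixed-size read response.
   Returns the number of bytes copied (clamped to 16). */
size_t read_resp_copy(const read_resp_t *resp, uint8_t *dst)
{
    size_t n = resp->nbytes > 16 ? 16 : resp->nbytes;
    memcpy(dst, resp->data, n);
    return n;
}
```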
Again, the point being that the hardware facing interface is separate from the protocol parsing, etc.
So far we have talked about how these system call requests and responses can be passed around as packet data. However, FPGA on chip resources could also be accessed through identical system call interfaces.
Imagine open-ing something like /dev/bram0 or /dev/ddr3_0 and using familiar POSIX interfaces to write and read data. That is what is done in this example.
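From the software side, such a device would be driven with nothing but standard POSIX calls. A minimal host-side sketch of that write-then-read-back pattern (the function name is hypothetical, and the actual example's device paths and sizes will differ):

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Open a device (e.g. a /dev/bram0-style path), write a buffer,
   seek back to offset 0, and read the data back.
   Returns bytes read back, or -1 on error. */
ssize_t write_then_read(const char *path, const void *out, void *in, size_t n)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return -1;
    if (write(fd, out, n) != (ssize_t)n) { close(fd); return -1; }
    if (lseek(fd, 0, SEEK_SET) < 0)      { close(fd); return -1; }
    ssize_t got = read(fd, in, n);
    close(fd);
    return got;
}
```

The appeal is exactly that nothing in this code hints it is talking to FPGA block RAM or DDR rather than an ordinary file.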