For computer vision in automated driving of cars and other vehicles. We want to preprocess data in real time together with the information provided by other sensors (vehicle inertial sensors, tachometers and so on). This is what is called "sensor fusion", something very similar to what humans (or any animal) do.
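To make the idea concrete, here is a minimal, purely illustrative sketch of one of the simplest sensor-fusion schemes, a complementary filter blending a gyro-integrated heading (smooth but drifting) with a camera-derived heading (noisy but absolute). The function name, blend factor, and numbers are all assumptions for illustration, not anyone's actual pipeline.

```python
# Hypothetical complementary-filter sketch: fuse a drifting dead-reckoned
# heading from an inertial sensor with a noisy absolute camera heading.
# alpha close to 1.0 trusts the gyro short-term, the camera long-term.

def fuse_heading(gyro_rate, cam_heading, prev_heading, dt, alpha=0.98):
    """One fusion step: blend integrated gyro rate with camera heading."""
    predicted = prev_heading + gyro_rate * dt   # dead-reckoned estimate
    return alpha * predicted + (1.0 - alpha) * cam_heading

# Feed in a few (gyro rad/s, camera rad) samples at 50 Hz.
heading = 0.0
for gyro, cam in [(0.1, 0.01), (0.1, 0.02), (0.1, 0.03)]:
    heading = fuse_heading(gyro, cam, heading, dt=0.02)
```

A real automotive stack would use something like an extended Kalman filter over many more inputs, but the core idea, weighting complementary sensors by their error characteristics, is the same.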
We already use commercial cameras, but the ability to integrate our hardware (our own FPGAs, in the future ASICs) directly with the camera at a low level is simply very tempting. In software we use Linux for the same reason: there is no way you could integrate that deeply with proprietary software.
We need real raw data, not raw data already preprocessed by the manufacturer, and total control over it, especially lighting. A vehicle is moving, and if the preprocessor cannot handle the change of lighting when entering or exiting a tunnel fast enough on a bright day, people could die. You need total control, stability and repeatability.
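The tunnel case is essentially a control problem: with raw sensor access you can run your own exposure loop with known, bounded latency instead of waiting on an opaque auto-exposure. A minimal sketch of such a loop, a proportional controller driving exposure toward a target mean brightness, might look like this (the gain, target, and limits are made-up illustrative values, not real hardware parameters):

```python
# Hypothetical deterministic exposure controller for the tunnel-exit case:
# a proportional loop nudging sensor exposure toward a target mean
# brightness, with an explicit clamp to the sensor's valid range.

def next_exposure(exposure_us, mean_brightness, target=118.0,
                  gain=0.02, lo=20.0, hi=20000.0):
    """One control step: adjust exposure toward the brightness target."""
    error = target - mean_brightness              # > 0 means frame too dark
    exposure_us *= 1.0 + gain * error / target    # proportional correction
    return max(lo, min(hi, exposure_us))          # clamp to sensor limits

# Exiting a tunnel on a bright day: frames suddenly read near saturation,
# so the loop should pull exposure down on every step.
exp = 5000.0
for brightness in [240, 235, 200, 150, 120]:      # successive frame means
    exp = next_exposure(exp, brightness)
```

The point is not the controller itself but that every step is yours to inspect, test, and bound in time, which is exactly what a manufacturer's black-box preprocessor denies you.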
Without this we would have to design everything ourselves, which is very expensive.
It looks like they are focusing on cinema, so we are not sure about this, but it could be a very interesting possibility to explore.
Exactly. For a custom imaging pipeline, constructed via application-specific design, every element that is a black box invalidates the measurement aspect of the output. The sensor control system is an intimate part of that system. For example, machine vision cameras typically allow alteration of imaging parameters, but at the cost of lost time, meaning dropped frames. So rapid iteration to convergence, or rapid cycling through parameter settings, means the device might drop to 5 fps, or 1 fps. Magic Lantern (collaborators with AXIOM) corrected exactly this kind of defect for certain Canon DSLRs, among many other things.
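The throughput collapse described above is easy to see with a back-of-the-envelope model: if every parameter change stalls the sensor for a fixed settle time, cycling settings on every frame eats almost the whole frame budget. The settle time and rates below are assumed round numbers, not measurements from any particular camera.

```python
# Illustrative model of the dropped-frame cost of parameter changes:
# each change stalls the pipeline for settle_ms, leaving only the
# remaining time budget per second for actual capture.

def effective_fps(base_fps, settle_ms, changes_per_second):
    """Frames per second left after parameter-change stalls."""
    budget_ms = 1000.0 - settle_ms * changes_per_second
    return max(0.0, budget_ms / 1000.0) * base_fps

steady = effective_fps(60, settle_ms=0, changes_per_second=0)
cycling = effective_fps(60, settle_ms=30, changes_per_second=30)
```

With a 30 ms stall per change and a change every frame at 30 Hz, a nominal 60 fps camera is left with single-digit effective frame rates, the same order as the 5 fps / 1 fps figures above.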
It is possible to partner with sensor or camera makers in order to get the required specs and level of integration, but it is extraordinarily expensive, and only a handful of companies can do this. For the individual, partnering with a camera maker is out of the question, so for the interested individual AXIOM provides a real benefit. For commercial development, companies like FLIR Integrated Imaging Solutions (formerly Point Grey Research) exist, but they still lock down drivers and control firmware. And you can typically only afford to partner with one or two camera manufacturers, whereas in this case all that overhead is gone and you can just use the device directly.
It's a small market, but if you're in it, this kind of project is a great development.