1- You can start by creating a framework with a parallel interface on the side that includes the following:
- the capturing component,
- the sensor interface component,
- the rendering component,
- the tracking component,
- the Webflow SDK interface.
The interface handles interaction between the application and the other four modular components.
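The four components plus the SDK interface could be sketched as a simple pipeline, where the interface routes each frame through the components in order. This is only a minimal sketch with stubbed behavior; all class names and the frame dictionary keys are hypothetical, not part of any real SDK.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Base class for the four modular components (hypothetical API)."""
    @abstractmethod
    def process(self, frame): ...

class CapturingComponent(Component):
    def process(self, frame):
        # Wrap the raw camera frame (stubbed here).
        return {"raw": frame}

class SensorInterfaceComponent(Component):
    def process(self, frame):
        # Attach device sensor readings (gyro, accelerometer) to the frame.
        frame["sensors"] = {"gyro": (0.0, 0.0, 0.0)}
        return frame

class TrackingComponent(Component):
    def process(self, frame):
        # Estimate the camera pose from the frame (stubbed).
        frame["pose"] = "identity"
        return frame

class RenderingComponent(Component):
    def process(self, frame):
        # Composite the A-R content over the frame using the estimated pose.
        frame["rendered"] = True
        return frame

class WebflowSDKInterface:
    """The parallel interface: routes data between the app and the components."""
    def __init__(self):
        self.pipeline = [CapturingComponent(), SensorInterfaceComponent(),
                         TrackingComponent(), RenderingComponent()]

    def run(self, frame):
        for component in self.pipeline:
            frame = component.process(frame)
        return frame
```

The point of the sketch is that the application only ever talks to `WebflowSDKInterface`, so any of the four components can be swapped out without touching application code.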
2- You will need to create a suite, “Webflow Suite”, that includes:
- Webflow SDK Deployment License,
- Cloud License,
- Webflow Creator,
- Webflow Continuous Visual Search (CVS)
- ?GB (as much as you can offer your users) of Cloud Space.
3- User should be able to import .obj, .fbx, and .md2 file formats to the Webflow Creator.
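A first step for the Creator's import path is just validating the file extension before dispatching to a real parser. The sketch below assumes a hypothetical `import_model` helper; the three accepted formats are the ones listed above, and the parser dispatch is stubbed.

```python
from pathlib import Path

# Formats the Webflow Creator should accept (from the list above).
SUPPORTED_FORMATS = {".obj", ".fbx", ".md2"}

def import_model(path: str) -> dict:
    """Reject unsupported files before handing them to a real model loader."""
    ext = Path(path).suffix.lower()
    if ext not in SUPPORTED_FORMATS:
        raise ValueError(f"Unsupported model format: {ext}")
    # A real implementation would dispatch to an .obj/.fbx/.md2 parser here.
    return {"path": path, "format": ext}
```

Lower-casing the suffix means `Tree.OBJ` and `tree.obj` are treated the same, which saves users a confusing rejection.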
4- As mentioned earlier, you need to create a Continuous Visual Search (CVS): a cloud-based image-matching system that lets you create and easily manage a database of up to one million patterns, then quickly match them with their corresponding A-R content.
Due to the technical limitations of end users’ devices, A-R scenarios with more than 150 tracking images tend to encounter performance issues. The CVS circumvents these limitations by offloading the workload to the cloud. Your CVS should also allow for simplified content management.
To go one step further, your CVS should support not only 2D tracking patterns but 3D ones too q:
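The cloud-side half of a CVS boils down to: register each pattern as a descriptor linked to its A-R content, then answer queries with the nearest stored descriptor. Here is a toy sketch under that assumption; the class name, the 2-element descriptors, and the content URLs are all made up, and a real system would use robust image features plus an approximate nearest-neighbor index to reach a million patterns.

```python
import math

class ContinuousVisualSearch:
    """Toy cloud-side pattern database: register descriptors, match queries."""

    def __init__(self):
        self.patterns = {}  # pattern_id -> (descriptor, ar_content_url)

    def register(self, pattern_id, descriptor, ar_content_url):
        self.patterns[pattern_id] = (descriptor, ar_content_url)

    def match(self, query, threshold=1.0):
        # Return the A-R content linked to the closest stored descriptor,
        # or None if nothing falls within the distance threshold.
        best_id, best_dist = None, float("inf")
        for pid, (desc, _) in self.patterns.items():
            dist = math.dist(query, desc)
            if dist < best_dist:
                best_id, best_dist = pid, dist
        if best_id is None or best_dist > threshold:
            return None
        return self.patterns[best_id][1]
```

The threshold is what makes the search "continuous": the device can stream candidate frames and the service only answers when a stored pattern is actually a close match, instead of always returning its nearest neighbor.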
5- I would suggest starting by developing a community/forums, so everyone can share their ideas and expertise with this technology before you build the platform. At the same time, it needs a lot of internal brainstorming, because I believe users don't always know what they want: you should offer them new things so they come to want them ((:
I hope this helps. I will be tackling VR in a bit so people won't get bored reading this long discussion q: