Earlier this week, I successfully implemented basic transformations of simple Unity GameObjects, such as a cube, using ARCore and gesture detection. However, since one of our primary goals is to extract the transformation data and feed the updated parameters back to the compiler that Wenzhong has developed, I began further testing in a playground environment to identify the limitations of dimension abstraction.
As shown in the following video, although the transformations of the STL object are updated visually, Unity has been unable to accurately extract this data and reflect it on screen: the parameters of the chair, namely its length, width, and height, have been reset to 1, Unity's default size values. This is likely because, by default, Unity does not represent the dimensions of a game object; instead, it simplifies the representation by displaying only the object's scale. Since each object is initialized at 100% of its original size, Unity sets its initial dimensions to 1 x 1 x 1. Unfortunately, this is an inaccurate reflection of the object's real-world dimensions, and such parameters fail to describe furniture-specific attributes such as chair leg width, number of legs, etc. Thus, after further discussion with Wenzhong and Professor Chen earlier this week, I have decided to focus on developing a method (e.g. a script) that can decompress the STL binary data and apply transformations to the original STL file.
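To make the scale-versus-dimensions distinction concrete: the real-world dimensions can in principle be recovered by multiplying the per-axis scale factor by the bounding box of the original mesh. The sketch below (in Python rather than Unity C#, purely for illustration; the vertex data is invented, not taken from the actual chair model) shows this idea under that assumption.

```python
# Sketch: recovering real-world dimensions from a scale factor.
# Unity stores only a per-axis scale (1 = 100% of the imported mesh),
# so actual dimensions must be derived from the mesh's bounding box.

def bounding_box_dimensions(vertices):
    """Return (length, width, height) of the axis-aligned bounding box."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    zs = [v[2] for v in vertices]
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

def scaled_dimensions(vertices, scale):
    """Apply a per-axis Unity-style scale to the mesh's base dimensions."""
    base = bounding_box_dimensions(vertices)
    return tuple(d * s for d, s in zip(base, scale))
```

A unit cube scaled by (2, 1, 0.5) would then report dimensions of 2 x 1 x 0.5, whereas Unity's inspector would still only show the scale triple itself.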
Samples of pre-existing methods, written by other Unity developers, can be found in the following GitHub repository:
This repository demonstrates a method of decompressing a binary STL file using a nested vector structure. However, this method does not detect the dimensions of the object itself, and it is unclear whether transformations applied to the object will be reflected in the final product. Thus, over the next week or two, I will focus on improving this script to account for dimension transformations and on developing a working system to communicate such updates to the backend compiler.
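The binary STL format itself is simple and well documented: an 80-byte header, a little-endian 32-bit triangle count, and then 50 bytes per triangle (a 3-float normal, three 3-float vertices, and a 2-byte attribute field). A minimal parser along those lines, in Python for illustration (the referenced Unity script would do the equivalent in C#), might look like this; `stl_dimensions` is my own hypothetical helper showing how object dimensions could be extracted, which is the piece the existing method lacks:

```python
import struct

def parse_binary_stl(data):
    """Parse binary STL bytes into a list of triangles.

    Each triangle is (normal, (v0, v1, v2)), mirroring the nested
    vector structure used by the Unity importer described above.
    """
    # Triangle count is a little-endian uint32 after the 80-byte header.
    count = struct.unpack_from("<I", data, 80)[0]
    triangles = []
    offset = 84
    for _ in range(count):
        floats = struct.unpack_from("<12f", data, offset)  # normal + 3 vertices
        normal = floats[0:3]
        verts = (floats[3:6], floats[6:9], floats[9:12])
        triangles.append((normal, verts))
        offset += 50  # 48 bytes of floats + 2-byte attribute byte count
    return triangles

def stl_dimensions(triangles):
    """Axis-aligned bounding-box dimensions of the parsed mesh."""
    pts = [v for _, verts in triangles for v in verts]
    return tuple(
        max(p[axis] for p in pts) - min(p[axis] for p in pts)
        for axis in range(3)
    )
```

With something like this in place, the dimensions computed from the raw vertex data could be compared against (or updated from) the parameters the user manipulates in AR, rather than relying on Unity's 1 x 1 x 1 scale defaults.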