• Head camera vision processing for calibrating position

    • Maybe it is impossible to make a perfectly square gantry. Perhaps it is possible, but the idea behind developing this software is to work with different types of gantries, and it is very likely that some have been assembled more precisely than others. So the purpose of this solution is to develop a way to work with gantries where the X and Y axes are not at a perfect 90 degree angle. In addition to the lack of squareness, another issue is getting the step width perfectly calibrated, which is not so straightforward. At least in my experience, you can calculate in theory how many steps you need to hit your steps-per-millimeter ratio, but when you test it the result is not perfect. So then you incrementally change the settings until you hit your target ratio, which is a tedious process, especially when it has to be repeated. You can get close enough this way, but it is never perfect.
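      For reference, here is a quick sketch of the theoretical side of that calculation, the usual steps-per-millimeter formula for a belt-driven axis, with typical RepRap-style values standing in for any real instrument's settings.

          # Theoretical steps-per-millimeter for a belt-driven axis, as commonly
          # computed for RepRap-style firmware.  Motor, microstepping, belt and
          # pulley values are typical examples, not a real instrument's settings.
          full_steps_per_rev = 200      # 1.8 degree stepper motor
          microstepping = 16
          belt_pitch_mm = 2.0           # GT2 belt
          pulley_teeth = 20

          steps_per_mm = (full_steps_per_rev * microstepping) / (belt_pitch_mm * pulley_teeth)
          print(steps_per_mm)           # 80.0 in theory; the measured value usually differs slightly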

      Linear encoders are very useful for calibrating instruments, but they add to the cost and are a drag to work with when programming. RepRap 3D printers do not use them, and the intent with this instrument is to work as closely as possible to how these printers work. That said, linear encoders are definitely the most direct way to resolve the steps-per-millimeter issue, but they still leave the 90 degree squareness issue, which seems tedious to resolve. And then you run into the multiple-instrument problem again: not all instruments are going to have linear encoders, and the ones that do will not all have the same type.

      The Gridding page (iot_griddingpage.png) is a tool that was developed to resolve these issues by using a head camera instead. A head camera can measure the distance between two points, and this can be used to calculate how many steps are needed to hit those targets. This resolves the steps-per-millimeter issue, since you then no longer need to calibrate it perfectly in your motion controller firmware, and it also resolves the shift, since you then know how much to compensate for it. The objective of this tool is to enable the user to take images collected by a special video microscope, measure the positions of targets with an image processing algorithm, and use this information to generate target coordinates. This is cooler anyway, since the camera can then be used for other things too, like reading assays or other types of data collection.
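      Here is a minimal python sketch of that idea, with made-up numbers: command a known move between two targets, measure the real displacement with the camera, and derive a scale correction plus an X-to-Y shift correction that later positioning can apply.

          # Sketch of camera-based calibration: the gantry is commanded to move a
          # nominal distance between two targets and the head camera measures how
          # far the target really moved.  All numbers here are illustrative.
          commanded_x = 9.0    # commanded move between the two targets, mm
          measured_x = 8.87    # real X displacement seen by the camera, mm
          measured_y = 0.12    # Y drift during the X move: the gantry is not square

          # Scale correction: commanded mm needed per real mm in X
          x_scale = commanded_x / measured_x
          # Shear correction: Y compensation to apply per real mm travelled in X
          y_shear = -measured_y / measured_x

          def corrected_move(target_x_mm, target_y_mm):
              """Translate a desired physical position into the coordinates that
              must be sent to the motion controller so the head lands there."""
              gcode_x = target_x_mm * x_scale
              gcode_y = target_y_mm + target_x_mm * y_shear
              return gcode_x, gcode_y

          print(corrected_move(9.0, 0.0))   # compensated coordinates for the second target

      A second measurement along Y would give the Y scale the same way; the sketch only covers the X move.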

      An image from the video microscope can be opened in this page and displayed through an image processing tool that can be used for feature detection. There is a form on this page for adjusting the image processing algorithm. Currently this algorithm is used for spotfinding, which detects round or small square objects.
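      The exact algorithm lives in the Gridding page, but as a rough illustration, spotfinding of this kind can be done with OpenCV's SimpleBlobDetector; the file name and parameter values below are placeholders.

          # Rough spotfinding illustration using OpenCV's SimpleBlobDetector.
          # The real Gridding page algorithm may differ; file name and
          # parameters here are only placeholders.
          import cv2

          image = cv2.imread("microscope_frame.png", cv2.IMREAD_GRAYSCALE)

          params = cv2.SimpleBlobDetector_Params()
          params.filterByArea = True
          params.minArea = 20           # reject specks smaller than a real spot
          params.filterByCircularity = True
          params.minCircularity = 0.6   # accepts round spots and small squares

          detector = cv2.SimpleBlobDetector_create(params)
          keypoints = detector.detect(image)

          # Each keypoint carries the pixel-space center (pt) and a size estimate
          for kp in keypoints:
              px, py = kp.pt
              print(f"spot at column {px:.1f}, row {py:.1f}, diameter {kp.size:.1f} px")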




      Once the spotfinding button is clicked, the spots are recognized and measured. The most important data are the X and Y positions, and only the first spot, the one in the top left corner, is used. It is also possible to measure multiple spots, recording both their positions and their pixel abundance. The graphic shows some of the data that can be collected for the four detected spots (a pixel-to-millimeter conversion sketch follows the list):
      • PX - column center of spot in image pixels
      • PY - row center of spot in image pixels
      • CX - column center of spot translated to robot millimeters
      • CY - row center of spot translated to robot millimeters
      • Signal to noise - Signal of the spot compared to its localized background
      • Diameter - Spot diameter in micrometers
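
      As a rough sketch of how PX/PY could become CX/CY, assume the camera scale (micrometers per pixel) and the stage position at which the frame was captured are known; the values below are placeholders.

          # Translate a spot's pixel center (PX, PY) into robot millimeters
          # (CX, CY), measured from the optical center of the frame (the green
          # circle on the Gridding page).  Scale and positions are illustrative.
          UM_PER_PIXEL = 3.2                    # camera calibration, um per pixel
          IMAGE_WIDTH, IMAGE_HEIGHT = 1280, 960

          def pixel_to_robot(px, py, stage_x_mm, stage_y_mm):
              dx_mm = (px - IMAGE_WIDTH / 2) * UM_PER_PIXEL / 1000.0
              dy_mm = (py - IMAGE_HEIGHT / 2) * UM_PER_PIXEL / 1000.0
              return stage_x_mm + dx_mm, stage_y_mm + dy_mm

          cx, cy = pixel_to_robot(px=512.4, py=388.9, stage_x_mm=25.000, stage_y_mm=40.000)
          print(f"CX = {cx:.3f} mm, CY = {cy:.3f} mm")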


      So the nice thing about this tool is that it can be used both for calibrating positions and as a general-purpose data collector, since it can be used as a scanner.

      Essentially, the objective of this tool is to output calibrated gcode files that enable spotting in a straight line at defined center-to-center spacing. To do this, two images need to be analyzed so that the center-to-center steps and the X and Y shift can be measured. The tool determines what gcode coordinates are needed to go from object to object, which means that even if the Smoothieboard firmware is not perfectly calibrated, this software compensates for it when calculating the positioning coordinates. This tool also works with the Workplate page (iot_workplate_page.png), where the user selects which targets (like slides) are available for spotting or imaging. In this particular example there are two different types of targets, Target A and Target B, that have different array patterns.
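      A minimal sketch of that compensation, with invented measurements: the first image gives the robot-millimeter position of the first feature, the second image gives a feature a known number of columns away, and from these the real center-to-center step and the per-column Y shift are derived and used to lay out the grid.

          # Two analyzed images -> calibrated spotting coordinates.  The target
          # geometry and all numbers below are placeholders.
          first_cx, first_cy = 25.000, 40.000   # first spot, robot mm (image 1)
          other_cx, other_cy = 33.940, 40.110   # spot 9 columns away (image 2)
          columns_between = 9

          # Measured step per column (absorbs any firmware scale error) and the
          # per-column Y drift caused by the gantry not being square
          step_x = (other_cx - first_cx) / columns_between
          shift_y = (other_cy - first_cy) / columns_between

          def grid_coordinates(n_columns, n_rows, row_pitch_mm=1.0):
              """Yield calibrated (x, y) positions for an n_columns x n_rows array.
              row_pitch_mm is a placeholder; it would be measured the same way."""
              for row in range(n_rows):
                  for col in range(n_columns):
                      x = first_cx + col * step_x
                      y = first_cy + col * shift_y + row * row_pitch_mm
                      yield x, y

          for x, y in grid_coordinates(n_columns=4, n_rows=2):
              print(f"G1 X{x:.3f} Y{y:.3f}")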


      Once the targets are selected, you will see them displayed in the Gridding page (iot_griddingpage.png), both labeled as Target A.



      After the calibration settings are set up, there are additional settings to adjust before generating gcode coordinates. One is the dispensing tip to video microscope distance, in both the X direction and the Y direction. This is necessary if you want to do spotting, since the mapping is done with the microscope; if you want to spot at a mapped position, you need to know this relative distance. There are also input boxes for selecting whether to do spotting or imaging, the default feedrate for XY positioning, and the default feedrate for Z positioning. Finally, there is an input box for the default Z position that the Z axis travels to when submitting: gearmandefaultz and backtoz.
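      Here is a rough sketch of how these settings could enter the generated gcode; the offsets, feedrates, and heights are invented for the example.

          # For spotting, the tip-to-camera offset is added to the camera-mapped
          # position, and the XY/Z feedrates and default Z travel height are
          # written into each move.  All values are placeholders.
          TIP_OFFSET_X_MM = 35.0   # dispensing tip relative to the video microscope
          TIP_OFFSET_Y_MM = 0.5
          XY_FEEDRATE = 2000       # mm/min
          Z_FEEDRATE = 600
          DEFAULT_Z = 20.0         # back-to-Z travel height
          SPOT_Z = 2.0             # height at which the tip touches off

          def spotting_block(camera_x, camera_y):
              """Gcode lines to spot at a position that was mapped with the camera."""
              x = camera_x + TIP_OFFSET_X_MM
              y = camera_y + TIP_OFFSET_Y_MM
              return [
                  f"G1 X{x:.3f} Y{y:.3f} F{XY_FEEDRATE}",
                  f"G1 Z{SPOT_Z:.3f} F{Z_FEEDRATE}",
                  f"G1 Z{DEFAULT_Z:.3f} F{Z_FEEDRATE}",
              ]

          print("\n".join(spotting_block(25.000, 40.000)))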



      Now, after selecting the targets to spot on, deciding whether to use theoretical positioning or the image processing tool to calibrate the center-to-center spacing and the XY shift, deciding whether to use imaging or spotting (where the X and Y offset from camera to dispenser is set), inputting the XY and Z feedrates, and setting the default Z axis level, you can validate the accuracy of the positioning and output gcode data. The tool below, which is also on the Gridding page, shows these features.

      There is a button to position the camera over the position where the first spot was recognized. The nice thing about doing this is that you can see whether the center position (the green circle in the middle of the image) is over the first spot. You can also go to the first spotting position, spot some water there, and then go look at it with the video microscope. Also, you can generate the gcode file, and if this file is very long and you do not want to select it and copy it into a file that you later upload (which is possible), you can also save the data into a gcode file through the tool (later we will modify the python tool to work with these files).
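      The save step itself is simple; roughly, the generated lines are just written out so the file can be uploaded later instead of being copied by hand. The file name and contents below are placeholders.

          # Write the generated gcode lines to a file for later upload.
          gcode_lines = [
              "G90",                       # absolute positioning
              "G1 Z20.000 F600",           # lift to the default Z before travelling
              "G1 X25.000 Y40.000 F2000",  # first calibrated target
          ]

          with open("calibrated_run.gcode", "w") as handle:
              handle.write("\n".join(gcode_lines) + "\n")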



      Here is an example of how the video targeting can be successfully implemented. The image processing works by synchronizing the position controller with the image processing software, so you can track whether the positioning is accurate. It is a little hard to see, but if you look closely at the center of the image there is a small green circle that represents where the instrument is positioned, and it is directly above the target. This target is 100 µm wide and is spaced at 1 mm center-to-center spacing.


   © HTS Resources