
Conversation


@Adimote Adimote commented Feb 18, 2018

This PR is based on #29, as it uses some images from the fixed data set in testing. The first commit you should look at is 5bebf1f.

  • Added a new vision system using the unofficial Python 3 OpenCV bindings.
  • Removed the old ML-based vision system (good riddance).
  • Added distance_model as the new parameter for the camera model XML file generated by OpenCV.
  • As the old calibration files are removed too, the default marker size (the size of a marker whose ID isn't defined for the game) is now defined in game_specific.py.
  • Updated all distance-verifying tests to accept a certain amount of error (within 15% of the total distance); see the sketch below.
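For illustration, here is a minimal sketch of what a 15% relative tolerance check could look like; the helper name is made up for this example and is not the actual test code.

import math


def assert_distance_within_tolerance(actual_distance, expected_distance):
    """Assert that a measured distance is within 15% of the expected distance."""
    # rel_tol=0.15 accepts values that differ by at most 15%,
    # relative to the larger of the two magnitudes.
    assert math.isclose(actual_distance, expected_distance, rel_tol=0.15), (
        "distance {} is not within 15% of {}".format(
            actual_distance, expected_distance))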

@Adimote changed the title from "Implement new vision system" to "[WIP] Implement new vision system" on Feb 18, 2018

Adimote commented Feb 18, 2018

Just trying to get tests to pass before this is ready for review.

marker_id = apriltag_detection.id

# *************************************************************************
# NOTE: IF YOU CHANGE THIS, THEN CHANGE ROBOT-API camera.py
Member

You have changed this file; however, robot-api hasn't been changed (ideally by design). Is this note still necessary?

Member Author

It happens that the changes I made here don't affect robot-api's version. If I were to add a new value (e.g. orientation), I'd need to add it in robot-api too, so I think we still have a need for the message.

Contributor

I'm not sure that's strictly true, and it's certainly nowhere near as dramatic as this comment makes it appear.

Adding a value here makes it possible to also expose that value in robot-api. Removing a value here is (hopefully obviously) a breaking change, so we'd need to modify the consumers of the library anyway.

I'd be in favour of reducing the shoutiness of this comment. However, from what I can see the comment itself doesn't change here, so that would be a separate PR anyway.


distance_model = self.camera.distance_model
camera_model = self.camera.camera_model
detections = list(self.apriltag_detector.detect_tags(img))
Member

Why are you casting this to a list, only to iterate over it once? I think keeping it inside the list comprehension is much nicer.

Member Author

Good catch; this was because I iterated over it twice at one point when experimenting. Fixed in 5239714.

@PeterJCLaw (Contributor) left a comment

I realise this is work-in-progress, so some of the internal inconsistencies might be things already on your list of TODOs; hopefully this is useful input anyway.

rot_y=math.atan(-0.1), # or -5.710593137499643 degrees
dist=(0.01 + 1)**0.5, # or 1.004987562112089
rot_y=math.atan(-0.1),
dist=(0.1 + 1) ** 0.5,
Contributor

If the Cartesian position hasn't changed, then this too should be unchanged.

Member Author

Oops, that's a miscalculation; I forgot these values are squared. I'll make it more explicit.
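For reference, a minimal sketch of the intended calculation, assuming the marker sits at a Cartesian offset of -0.1 to one side and 1 straight ahead (those coordinates are inferred from the expected values, not taken from the fixtures):

import math

x, z = -0.1, 1.0  # assumed Cartesian offsets

rot_y = math.atan(x / z)           # atan(-0.1), roughly -5.710593 degrees
dist = math.sqrt(x ** 2 + z ** 2)  # sqrt(0.01 + 1), roughly 1.004987562112089

# The squares are the point here: (0.1 + 1) ** 0.5 would be sqrt(1.1), a different value.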

polar_x=1.6704649792860642,
polar_y=1.5707963267948966,
dist=1.004987562112089,
),
Contributor

Why are we removing this? While it's deprecated, it's still supported. We therefore still need to actually have it, so we should be validating that it outputs the same values.

Member Author

re-added in df9df93

# Rotation tolerance is independent of distance
ROTATION_TOLERANCE = 0.075

CALIBRATIONS = Path(__file__).parent.parent / 'calibrations' / 'tecknet_25cm'
Contributor

IMO it would be much better if we could have this still use the c270 samples (I realise that means also calibrating a c270) since we have confidence that the c270 model was correct. That would therefore make it much easier to have confidence that the new calibration mechanism is outputting consistent data.

Member Author

I don't think we should calibrate a c270 just so we don't need to change these tests. Calibrating has to be done manually using the interactive opencv script, and is thus more effort than it's worth.

rot_y=math.atan(-0.1), # or -5.710593137499643 degrees
dist=(0.01 + 1)**0.5, # or 1.004987562112089
rot_y=math.atan(-0.1),
dist=(0.1 + 1) ** 0.5,
Contributor

Why are you removing the comments here? They were present intentionally:

  • to help humans reason about whether the values are correct
  • to make it easier to understand test failures (since the 'expected' values will then be the calculated value from the comment, rather than the expression)

Member Author

re-added them in 5125186



def assertWithinTolerance(val, expected, tolerance, error_msg=""):
    assert (expected - tolerance) < val < (expected + tolerance), error_msg
Contributor

I've not used it, but you might enjoy pytest.approx.
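For context, a minimal sketch of what an assertion using pytest.approx might look like; the values and tolerance here are illustrative, not taken from the test suite.

import pytest


def test_distance_is_roughly_one_metre():
    measured = 1.003  # whatever the vision code reported
    # abs= gives an absolute tolerance; rel= would give a relative one.
    assert measured == pytest.approx(1.0, abs=0.05)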

Member Author

fixed in f1e0154

Contributor

@Adimote Out of curiosity: what are the error messages like from pytest.approx? Are they useful in helping understand the failure?

Member Author @Adimote, Feb 24, 2018

Yes, they're actually just as detailed as the failure messages you get without using approx.

return text


def get_calibration(file_name: Path) -> Dict:
Contributor

This will (or at least should) be failing type checking; please can we specify the details of this Dict? If they're too complex, consider using a typing.NamedTuple instead.
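For illustration, a minimal sketch of the typing.NamedTuple alternative being suggested; the field names are assumptions based on the surrounding code, not the actual fix.

from pathlib import Path
from typing import List, NamedTuple


class Calibration(NamedTuple):
    # Field names are illustrative; the real calibration data may differ.
    camera_matrix: List[List[float]]
    distortion_coefficients: List[List[float]]


def get_calibration(file_name: Path) -> Calibration:
    ...  # parse the OpenCV calibration XML and build a Calibration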

Member Author

Fixed a while ago (CBA to find commit hash)

values = _get_text(element.find('data'))
if data_type == 'd':  # doubles
    data = np.reshape(np.array([float(v) for v in values]),
                      (rows, cols))
Contributor

What's the benefit of going into numpy here? If there's precision that we're aiming for (numpy's float64 is more precise than float, IIRC), then we should be using float64 to parse the string. Otherwise IMO we should stick to Python types here (not least as it makes the type signatures much easier to reason about).
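A minimal sketch of the pure-Python parsing being suggested here (not necessarily what 50d2a45 actually does):

def parse_doubles(values, rows, cols):
    """Parse a flat sequence of number strings into a rows x cols list of lists."""
    floats = [float(v) for v in values]
    return [floats[r * cols:(r + 1) * cols] for r in range(rows)]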

Member Author

fixed in 50d2a45

Member Author

and a13fb5d

import numpy as np


def _get_text(element: etree.Element):
Contributor

This doesn't really return "text"; is there a better name for it?

Member Author

fixed in 304abe6



@functools.lru_cache()
def load_camera_calibrations(file_name: Path) -> Tuple[np.ndarray, np.ndarray]:
Contributor

Note that typing of numpy types doesn't actually work; they're treated as Any, so if we can avoid using their array types and use built-in types instead, that might be better.
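A minimal sketch of the kind of signature being suggested, using built-in types so the checker sees something more useful than Any (illustrative only, not the actual change):

import functools
from pathlib import Path
from typing import List, Tuple

Matrix = List[List[float]]


@functools.lru_cache()
def load_camera_calibrations(file_name: Path) -> Tuple[Matrix, Matrix]:
    ...  # parse the calibration file and return plain nested lists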


camera_model = self.camera.camera_model
detections = list(self.apriltag_detector.detect_tags(img))
detections = self.apriltag_detector.detect_tags(img)
Contributor

I think Jake's point was that we could leave this in the comprehension below entirely. This would both minimise the diff and also keep the clarity that the previous spelling had.
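In other words, something along these lines; this is a paraphrased fragment of the method, since the comprehension itself isn't shown in this diff, and _build_marker is a stand-in name:

markers = [
    self._build_marker(detection, camera_model, distance_model)
    for detection in self.apriltag_detector.detect_tags(img)
]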

@PeterJCLaw (Contributor)

@Adimote thanks for those changes; I think this branch is now all good except the deployment plan & testing it on a Pi.

My wrapping of solvePnP now works and I've put up #36 with code.

As our usage of lxml was fairly lightweight, I had a quick look at whether we could remove that. It turns out that the etree stuff we were using is present in the standard library. I'm not sure whether it's actually the same code, but it works the same as far as we were using it. I've pushed #37 which moves us to the standard library there.

The combination of those allows us to ship this without any extra packages, which makes deployment much easier! Once those PRs are merged into this one, this PR can be merged into master.


PeterJCLaw commented Feb 25, 2018

@Adimote I've just spotted that this still lists some of the now-redundant dependencies in the Debian package control file. We should remove those.

Specifically we can remove python3-sklearn and python3-yaml.

@PeterJCLaw (Contributor)

@Adimote manual testing of this on a Pi mostly works, huzzah!

There are a couple of easily fixable breakages:

  • robotd will currently always request a calibration for a 'c270' model. We therefore need to not explode if that's asked for. We can do this either by faking the c270 calibration as being the C016 calibration (probably via a symlink) or by actually calibrating a c270. My preference would be for the latter, as it's less confusing for anyone reading this repo and will allow developers to use c270s if that's what they have to hand.
    (I worked around this by editing my robotd/camera.py to ask for 'C016')
  • robotd expects that a Token will have a homography_matrix attribute (with a .tolist() method). We'll need to continue to expose that so as not to break robotd users (I don't see any reason to remove it really); see the sketch after this list.
    (I worked around this by commenting out that line in my robotd/camera.py)
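For the second point, a minimal sketch of keeping that attribute around for compatibility; the Token structure shown here is assumed, not copied from the repo:

import numpy as np


class Token:
    def __init__(self, homography_matrix: np.ndarray) -> None:
        # robotd calls .tolist() on this, so keep exposing it as a numpy array.
        self.homography_matrix = homography_matrix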

Other than that, this now works well and seems to be fairly accurate :)

This merges 'new-vision-check-image-size' into new-vision. If we use a calibration for a resolution other than the one it is intended for, we'll get nonsense values. Ensure that doesn't happen by requiring that `load_camera_calibrations` is passed the image resolution which will be used, and then validating that against the value in the calibration file.
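A minimal sketch of the validation that commit message describes; the parser helper and error type are illustrative:

from pathlib import Path
from typing import Tuple


def load_camera_calibrations(file_name: Path, image_size: Tuple[int, int]):
    calibration = _parse_calibration_file(file_name)  # hypothetical parser
    if calibration.image_size != image_size:
        raise ValueError(
            "Calibration {} is for resolution {}, not {}".format(
                file_name, calibration.image_size, image_size))
    return calibration.camera_matrix, calibration.distortion_coefficients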
@PeterJCLaw (Contributor) left a comment

:shipit: Woo!

@Adimote Adimote merged commit 59e6819 into master Feb 25, 2018

mxbi commented Feb 28, 2018

Now that this has been merged, is the new vision system in a state where we can load it up on the Pi (after installing the libopencv/python headers) and try it out? :)

@PeterJCLaw (Contributor)

@mxbi happily there aren't any new dependencies for this change, so all that's needed is to build the latest package and install it.

Having done that, you should find that the existing support in robot-api yields much more accurate results, though you may wish to wait for the next published update, as that will include some other niceties too.
