Visual motion interpretation is at the core of many real-world AI applications, including self-driving cars, robotics, augmented reality, and human motion analysis. Classical computational approaches are based on computing correspondences, i.e., matching image points across consecutive frames, which are then used to reconstruct scene models. In biology, however, we find systems with low computational power that do not compute correspondence yet use visual motion very efficiently; their principles have not yet been translated into our computational approaches. I have explored the cue of visual motion from three angles. First, neuromorphic event-based sensors do not record image frames but instead report temporal information about scene changes, providing data in the form of point clouds in space-time that approximate continuous motion. Exploiting the advantages of this data, we developed scene segmentation algorithms that function even in the most challenging scenarios. Second, by changing the sequence of computations and estimating 3D motion from robust filter output, we have developed optimization and machine learning algorithms that are more robust and generalize better to new scenarios. Third, I present experiments on visual illusions that give an indication of the motion computations in early visual processing in nature and point to directions for improving current motion computations.
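As background for the first direction, the output of an event-based sensor can be pictured as follows. A minimal sketch, assuming a generic sensor resolution and synthetic data (all names, sizes, and the temporal window are illustrative, not the actual algorithms described above): each event is a tuple of pixel coordinates, a timestamp, and a polarity, and the stream as a whole forms a point cloud in space-time.

```python
import numpy as np

# Illustrative sketch: an event-based sensor emits asynchronous events
# (x, y, t, polarity) instead of dense image frames. We build a synthetic
# event stream and view it as a point cloud in space-time.
rng = np.random.default_rng(0)
n_events = 1000

events = np.zeros(n_events, dtype=[("x", np.uint16), ("y", np.uint16),
                                   ("t", np.float64), ("p", np.int8)])
events["x"] = rng.integers(0, 240, n_events)            # hypothetical sensor width
events["y"] = rng.integers(0, 180, n_events)            # hypothetical sensor height
events["t"] = np.sort(rng.uniform(0.0, 0.1, n_events))  # timestamps in seconds
events["p"] = rng.choice([-1, 1], n_events)             # OFF / ON polarity

# The space-time point cloud: each event is a point (x, y, t).
cloud = np.stack([events["x"], events["y"], events["t"]], axis=1)

# Slice a short temporal window, as a segmentation algorithm operating on
# such data might do before grouping events by motion.
window = cloud[(cloud[:, 2] >= 0.02) & (cloud[:, 2] < 0.03)]
print(cloud.shape, window.shape)
```

Because the timestamps are near-continuous rather than quantized to a frame rate, algorithms can pick temporal windows freely, which is one of the advantages this data offers over frame sequences.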