Overview of papers by my team recently accepted to CVPR 2018:

(1) We revisit knowledge transfer for training object detectors on target classes from weakly supervised training images, helped by source classes with bounding-box annotations. We explore knowledge transfer functions ranging from class-specific to class-generic, demonstrate large improvements over weakly supervised baselines, and also carry out across-dataset transfer experiments.

(2) We introduce Intelligent Annotation Dialogs: we train an agent to automatically choose a sequence of actions for a human annotator to produce a bounding box in a minimal amount of time. We introduce a model-based agent and a reinforcement learning agent, and demonstrate that both can adapt to image difficulty, detector strength, and desired box quality.

(3) Semantic classes can be either things (e.g. car) or stuff (e.g. grass). To understand stuff and things in context, we augment the complete COCO dataset (164K images) with stuff annotations and carry out a wide range of analyses.

(4) We present a semantic part detection model that leverages various types of object information as context, all integrated in a single neural network. This leads to considerably higher performance than using part appearance alone.