Deep learning algorithms are being deployed across many applications (recommender systems, self-driving cars, face recognition, conversational bots) and potentially affect billions of people. However, testing of deep learning algorithms has largely been limited to empirical validation of predictions on a held-out set (using metrics such as test accuracy, precision, and recall). We argue that this "predict well on data" specification is insufficient to capture real-world constraints on deep learning algorithms (robustness, fairness, privacy, explainability, etc.) and that satisfying these constraints requires a specification-driven approach to deep learning. We develop formal verification algorithms to certify input-output properties of deep learning models and validate them on several practical neural network classifiers. Further, we show that by folding simple verification algorithms into the training loop, we can train models that are verifiably robust to adversarial attacks with a relatively small overhead in training time.
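The abstract does not name a specific verification algorithm. As an illustrative sketch only, one simple certification technique of the kind that can be folded into training is interval bound propagation: propagate an L-infinity perturbation ball through the network as elementwise intervals and check whether the worst-case logits still favour the true label. The function names and the toy network below are hypothetical, not the paper's implementation.

```python
import numpy as np

def interval_forward(W_list, b_list, lo, hi):
    """Propagate elementwise interval bounds [lo, hi] through a stack of
    affine layers with ReLU activations (no ReLU after the final layer)."""
    for i, (W, b) in enumerate(zip(W_list, b_list)):
        center = (lo + hi) / 2
        radius = (hi - lo) / 2
        # An affine map sends an interval box to a new box: the center is
        # mapped exactly, and the radius grows by |W| @ radius.
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lo, hi = new_center - new_radius, new_center + new_radius
        if i < len(W_list) - 1:  # ReLU is monotone, so apply it to both bounds
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

def certified_robust(W_list, b_list, x, eps, label):
    """True if every input within L-inf distance eps of x is provably
    classified as `label`. Sound but incomplete: False only means
    the interval bounds are too loose to certify."""
    lo, hi = interval_forward(W_list, b_list, x - eps, x + eps)
    # Worst case for the true class: its logit at the lower bound,
    # every other logit at its upper bound.
    worst = hi.copy()
    worst[label] = lo[label]
    return bool(np.argmax(worst) == label)
```

Because the certified worst-case margin is differentiable in the weights, a loss on these bounds can be minimized during training, which is the sense in which verification can be "folded into the training loop" at modest extra cost.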