Statistical machine learning in scientific applications is often complicated by high-dimensional, continuous, and nonlinear interactions. The resulting reasoning tasks pose computational challenges that often force the use of low-fidelity approximations, which fail to capture complex global phenomena, in place of accurate statistical models. In this talk, I will discuss the benefits of approximate inference in accurate, but complex, models. I will show how variational inference can be flexibly adapted to perform efficient inference in such models, with examples including articulated human pose estimation, protein structure prediction, and intracortical brain-computer interfaces. I will further show how these methods can be extended to sequential decision-making tasks that arise, for example, in optimal experiment design. Using gene regulatory network inference as an example, we will see how bounds on the mutual information can lead to efficient sequential optimization of experiment designs.