Recently, online optimization methods have been leveraged to develop the online nonstochastic control framework, which learns online gradient perturbation controllers in the presence of nonstochastic adversarial disturbances. Interestingly, using online optimization to adapt controllers under unknown disturbances is not an entirely new idea: a similar algorithmic framework, called Retrospective Cost Adaptive Control (RCAC), appeared in the controls literature in the 2000s. In this paper, we present the connections between online nonstochastic control and RCAC and discuss the complementary strengths of the two approaches: RCAC can stabilize unknown unstable plants via the use of a target model, while online nonstochastic control enjoys provably near-optimal regret bounds given a stabilizing policy a priori. We further propose an integration of these two approaches. We hope that our insights will aid the development of new algorithms that combine the strengths of both.