Abstract- Deep neural network based models have shown excellent ability in solving complex learning tasks in computer vision, speech recognition, and natural language processing. A deep neural network learns a representation of the input data by solving a specific learning task. Optimization algorithms such as SGD, Momentum, Nesterov, RMSProp, and Adam are commonly used to minimize the loss function of a deep neural network model. However, the trained model may leak information about the training data. To mitigate this leakage, differentially private optimization algorithms can be used to train deep neural network models such as DNNs and CNNs. It has been shown that these differentially private optimization algorithms can outperform differentially private SGD, yielding higher model accuracy and faster convergence.
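To make the mechanism concrete, the sketch below illustrates the per-example gradient clipping and Gaussian noise addition that differentially private optimizers of this kind typically build on; it is a minimal illustration, not the paper's implementation, and all function names and parameter values (`clip_norm`, `noise_multiplier`, the toy linear-regression gradients) are assumptions chosen for the example.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=np.random.default_rng(0)):
    """One differentially private gradient step: clip each per-example
    gradient to L2 norm `clip_norm`, average, add Gaussian noise scaled
    by `noise_multiplier * clip_norm`, then apply a plain SGD update."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds clip_norm.
        clipped.append(g / max(1.0, norm / clip_norm))
    batch = len(clipped)
    avg_grad = np.mean(clipped, axis=0)
    # Noise on the averaged gradient: std = noise_multiplier * clip_norm / batch.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=avg_grad.shape)
    return params - lr * (avg_grad + noise)

# Toy usage: squared-error gradients for a batch of 4 linear-regression examples.
rng = np.random.default_rng(42)
w = np.zeros(3)
X, y = rng.normal(size=(4, 3)), rng.normal(size=4)
per_example_grads = [2 * (x @ w - t) * x for x, t in zip(X, y)]
w = dp_sgd_step(w, per_example_grads)
print(w)
```

The same clipped-and-noised gradient could equally be fed into a Momentum, RMSProp, or Adam update rule in place of the plain SGD step shown here; that substitution is what distinguishes the differentially private optimizer variants discussed above.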