Deep Neural Networks (DNNs) have demonstrated their learning capabilities in runtime data processing for modern applications. However, DNNs are becoming deeper and more sophisticated in pursuit of higher accuracy, which demands remarkable computing capacity. The problem is amplified when distinct network architectures must be designed for different tasks. For example, many modern applications such as autonomous vehicles require different deep Convolutional Neural Networks (CNNs) to detect pedestrians, identify obstacles, read traffic signs, and so on. However, there is a promising opportunity to tackle this challenge by exploiting related tasks and sharing common features. For example, a CNN designed for human age estimation inherently produces useful training signals for a gender classification task, since the initial layers extract low-level information such as lines or circles, while complex objects are learnt in the later layers. Thus, by integrating shared components among CNNs, we can decrease the total model size. In addition, the quality of the results improves, because sharing increases generalization across tasks and reduces the risk of overfitting. This approach is called Multi-Task Learning (MTL). Most previous MTL works have tried to present robust fixed architectures for MTL, but such inflexible networks cannot deliver acceptable results on modern deep learning benchmarks. Moreover, identifying shared points manually is error-prone, and there is no general method for doing so. The main aim of this proposal is an automatic framework that designs a robust multi-task neural network, using a meta-heuristic exploration algorithm to solve the neural architecture search problem and find an accurate multi-task neural architecture.
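The feature-sharing idea described above (early layers shared across tasks, task-specific layers at the end) is commonly called hard parameter sharing. The following is a minimal NumPy sketch of that structure, not the proposal's actual framework; the layer sizes, weight initialization, and the two example tasks (age estimation and gender classification) are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of hard parameter sharing in MTL: one shared trunk
# feeds two task-specific heads, so shared features are computed only once.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Shared trunk: stands in for the early CNN layers that learn generic
# features (lines, circles) useful to both tasks.
W_shared = rng.standard_normal((64, 32)) * 0.1

# Task-specific heads reuse the shared representation.
W_age = rng.standard_normal((32, 1)) * 0.1      # age estimation (regression)
W_gender = rng.standard_normal((32, 2)) * 0.1   # gender classification (2 logits)

def forward(x):
    h = relu(x @ W_shared)            # shared features, computed once
    return h @ W_age, h @ W_gender    # two task outputs from one trunk

x = rng.standard_normal((8, 64))      # batch of 8 flattened inputs
age_pred, gender_logits = forward(x)
print(age_pred.shape, gender_logits.shape)  # (8, 1) (8, 2)
```

Because the trunk is evaluated once per input, the total parameter count and inference cost are lower than running two independent networks; the open question the proposal targets is *which* layers to share, which is what the architecture search must decide.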