Deep Neural Networks (DNNs) have drawn considerable interest recently due to their superior performance on typical machine learning problems. However, DNNs are computationally and memory intensive, consuming substantial power per operation. Achieving high accuracy at an acceptable hardware cost on these difficult problems is therefore a challenge. To tackle this challenge, Ternary Neural Networks (TNNs) have been proposed, which can deliver accuracy close to that of state-of-the-art floating-point networks. Implementing a TNN on an FPGA is beneficial because it reduces the cost of the Multiply-Accumulate (MAC) units, thereby increasing energy efficiency. The main aim of this proposal is to design and implement an architecture for TNNs on FPGA.
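To illustrate why ternary weights reduce MAC cost, the sketch below shows a common threshold-based ternarization (restricting weights to {-1, 0, +1}) and a multiplier-free dot product. This is a minimal software illustration of the general technique, not the specific architecture proposed here; the threshold `delta` is an assumed illustrative hyperparameter.

```python
import numpy as np

def ternarize(w, delta=0.05):
    # Map each weight to {-1, 0, +1} using a threshold delta
    # (delta is an illustrative value, not taken from the proposal).
    t = np.zeros_like(w, dtype=np.int8)
    t[w > delta] = 1
    t[w < -delta] = -1
    return t

def ternary_mac(x, t):
    # Multiplier-free MAC: with ternary weights, the dot product
    # reduces to additions, subtractions, and skips.
    acc = 0.0
    for xi, ti in zip(x, t):
        if ti == 1:
            acc += xi
        elif ti == -1:
            acc -= xi
        # ti == 0: skip (no hardware operation needed)
    return acc

w = np.array([0.8, -0.3, 0.01, -0.9])
x = np.array([1.0, 2.0, 3.0, 4.0])
t = ternarize(w)          # -> [1, -1, 0, -1]
print(ternary_mac(x, t))  # 1.0 - 2.0 - 4.0 = -5.0
```

Because every "multiplication" is at most a sign flip or a skip, an FPGA datapath for such a layer needs only adders and subtractors, which is the source of the energy savings mentioned above.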