In this paper, we present a 3D shape modeling system based on the Tsai-Shah shape from shading (SFS) algorithm. The SFS stage provides partial 3D shapes in the form of depth maps of the object to be reconstructed. Our previously developed Projected Polygon Representation Neural Network (PPRNN) performs the reconstruction process: it successively refines the polygon vertex parameters of an initial 3D shape based on 2D images taken from multiple views. The reconstruction is finalized by mapping the texture of the object image onto the initial 3D shape. It is known from static stereo analysis that, even when multiple-view images are used, the 3D structure cannot be recovered without base-distance information, i.e., the baseline separation between the different camera positions, unless something else is known about the scene. Here we propose the use of shading features to extract 3D depth maps with a fast SFS algorithm, instead of reconstructing the object from bare 2D images alone. Preliminary results for reconstructing a human (mannequin) head and face are presented. Our experiments show that using only 2D images yields a poor reconstruction, whereas using the depth maps produces a smoother and more realistic 3D object.
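To make the depth-map extraction step concrete, the following is a minimal sketch of the linear (Newton-iteration) SFS scheme attributed to Tsai and Shah, not the paper's actual implementation. It assumes a Lambertian surface, a single distant light source, image intensities normalised to [0, 1], backward-difference surface gradients, and a flat initial depth map; the function name, light-direction parameter, and iteration count are illustrative choices.

```python
import numpy as np

def tsai_shah_sfs(E, light=(0.01, 0.01, 1.0), n_iter=200):
    """Recover a depth map from a single shaded image via Newton updates
    on the linearised image irradiance equation (Tsai-Shah style sketch)."""
    sx, sy, sz = light
    ps, qs = sx / sz, sy / sz                    # light source in gradient space
    norm_s = np.sqrt(1.0 + ps**2 + qs**2)

    Z = np.zeros_like(E, dtype=float)            # flat initial surface

    for _ in range(n_iter):
        # Backward-difference gradients; np.roll wraps at the borders,
        # which is acceptable for this illustrative sketch.
        p = Z - np.roll(Z, 1, axis=1)            # Z(x, y) - Z(x-1, y)
        q = Z - np.roll(Z, 1, axis=0)            # Z(x, y) - Z(x, y-1)

        pq = 1.0 + p**2 + q**2
        N = 1.0 + p * ps + q * qs
        R = N / (np.sqrt(pq) * norm_s)           # Lambertian reflectance R(p, q)

        f = E - R                                # want f(Z) = 0 at every pixel

        # df/dZ, using dp/dZ = dq/dZ = 1 for the backward differences above.
        dR_dp = ps / (np.sqrt(pq) * norm_s) - p * N / (pq**1.5 * norm_s)
        dR_dq = qs / (np.sqrt(pq) * norm_s) - q * N / (pq**1.5 * norm_s)
        df_dZ = -(dR_dp + dR_dq)

        # One Newton-Raphson step per pixel, guarding tiny derivatives.
        denom = np.where(np.abs(df_dZ) < 1e-6, 1e-6, df_dZ)
        Z = Z - f / denom

    return Z
```

Running such a routine on each 2D view would yield per-view depth maps that can then serve as the shading-derived input to the reconstruction stage, in place of the raw intensity images.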