
		<paper>
			<loc>https://jjcit.org/paper/117</loc>
			<title>IMPROVED DEEP LEARNING ARCHITECTURE FOR DEPTH ESTIMATION FROM SINGLE IMAGE</title>
			<doi>10.5455/jjcit.71-1593368945</doi>
			<authors>Suhaila F. A. Abuowaida, Huah Yong Chan</authors>
			<keywords>Depth estimation,Single image,Deep learning,Encoder-decoder</keywords>
			<citation>24</citation>
			<views>7737</views>
			<downloads>2003</downloads>
			<received_date>28-Jun-2020</received_date>
			<revised_date>3-Aug-2020 and 20-Sep-2020</revised_date>
			<accepted_date>27-Sep-2020</accepted_date>
			<abstract>Depth estimation from a single image has garnered attention in recent years, owing to its numerous applications in medicine, robotics, video games and 3D-reality applications. Although human vision readily recovers this third dimension of depth, the task remains challenging for computer vision: differences in scene geometry and texture, occlusions at scene boundaries and an inherent ambiguity all arise from the minimal information that can be gathered from a single image. This paper therefore proposes a novel depth-estimation architecture whose stages manage depth estimation from a single RGB image. An encoder-decoder architecture is proposed, built on an improved DenseNet that extracts the feature map of an image using a skip-connection technique. The paper also adopts the reverse Huber loss function, which suits the proposed architecture and is driven by the value distributions commonly present in depth maps. Experimental results indicate that, on the NYU Depth v2 dataset, the proposed architecture outperforms other state-of-the-art methods while having fewer parameters and requiring less training time.</abstract>
		</paper>


