Abstract

We propose a methodology to characterize the image contents of a web segment, and we present an
analysis of the contents of a segment of the Chilean web (.CL domain). Our framework uses an efficient
web-crawling architecture, standard content-based analysis tools (to extract low-level features such as
color, shape, and texture), and novel skin and face detection algorithms. In an automated process, we start by examining all websites within a domain (e.g., .cl websites), obtaining links to images, and downloading a large number of them (approximately 383,000 images across all of our experiments, corresponding to about 35 billion pixels). Once the images are downloaded to a local server, our process automatically extracts several low-level visual features (color, texture, shape, etc.) and then applies our novel skin and face detection algorithms. The results of feature extraction and of skin and face detection are used to characterize the contents of the web segment. We tested our methodology on a segment of the Chilean web (.cl) by automatically downloading and processing 183,000 images in 2003 and 200,000 images in 2004.
We present some statistics derived from both sets of images, which should be of use to anyone concerned
with the image content of the web in Chile. Our study is the first to use content-based tools to determine the image contents of a given web segment.
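For concreteness, a minimal sketch of the kind of per-image processing the abstract describes (a standard global color-histogram feature plus a rule-based skin-pixel test) is given below. It is an illustration under stated assumptions: the RGB skin thresholds follow a common heuristic from the skin-detection literature and are not the authors' actual detector, and the file name example.jpg is hypothetical.

    # Minimal sketch of per-image analysis: a global color histogram (a standard
    # low-level feature) and a rule-based skin-pixel ratio. The RGB thresholds
    # below are a common heuristic from the skin-detection literature, NOT the
    # paper's actual detector.
    import numpy as np
    from PIL import Image

    def color_histogram(img, bins_per_channel=8):
        """Normalized RGB color histogram with bins_per_channel**3 bins."""
        rgb = np.asarray(img.convert("RGB")).reshape(-1, 3)
        hist, _ = np.histogramdd(rgb, bins=(bins_per_channel,) * 3,
                                 range=((0, 256),) * 3)
        return hist.ravel() / rgb.shape[0]

    def skin_pixel_ratio(img):
        """Fraction of pixels passing a classic RGB skin rule (illustrative)."""
        rgb = np.asarray(img.convert("RGB")).astype(int)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        mask = ((r > 95) & (g > 40) & (b > 20)
                & (rgb.max(axis=-1) - rgb.min(axis=-1) > 15)
                & (np.abs(r - g) > 15) & (r > g) & (r > b))
        return float(mask.mean())

    if __name__ == "__main__":
        img = Image.open("example.jpg")   # hypothetical downloaded image
        features = color_histogram(img)   # 512-dimensional feature vector
        print(f"skin-pixel ratio: {skin_pixel_ratio(img):.3f}")

A per-image skin ratio of this kind can feed aggregate, domain-level statistics (e.g., the fraction of images with a high proportion of skin pixels), which is the sort of summary the study derives from its downloaded image sets.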