Ghislain Fourny: Catalogue data in Spring Semester 2018

Award: The Golden Owl
Name: Dr. Ghislain Fourny
Address:
Dep. Informatik
ETH Zürich, STF H 311
Stampfenbachstrasse 114
8092 Zürich
SWITZERLAND
Telephone: +41 44 632 31 55
E-mail: gfourny@inf.ethz.ch
URL: http://people.inf.ethz.ch/gfourny
Department: Computer Science
Relationship: Lecturer

Number | Title | ECTS | Hours | Lecturers
252-0341-01L | Information Retrieval | 4 credits | 2V + 1U | G. Fourny
Abstract: Introduction to information retrieval with a focus on text documents and images.

Main topics comprise the extraction of characteristic features from documents, index structures, retrieval models, search algorithms, benchmarking, and feedback mechanisms. Searching the web, images, and XML collections demonstrates recent applications of information retrieval and their implementation.
Objective: In-depth understanding of how to model, index, and query unstructured data (text): the vector space model, Boolean queries, terms, posting lists, and dealing with errors and imprecision.

Knowledge of how to make queries faster and how to make them work on very large datasets. Knowledge of how to evaluate the quality of an information retrieval engine.

Knowledge of alternative models (structured data, probabilistic retrieval, language models) as well as basic web search algorithms such as Google's PageRank.
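The central data structure behind these objectives can be illustrated with a minimal, hypothetical sketch of an inverted index with sorted posting lists and a Boolean AND query; the toy documents and function names are made up for illustration and are not the course's reference implementation:

```python
# Minimal inverted-index sketch: map each term to a sorted
# posting list of document IDs, then intersect two lists for AND.
def build_index(docs):
    index = {}
    for doc_id, text in enumerate(docs):
        for term in set(text.lower().split()):
            index.setdefault(term, []).append(doc_id)
    return index  # posting lists are sorted because doc IDs ascend

def intersect(p1, p2):
    # Linear-time merge of two sorted posting lists.
    i = j = 0
    result = []
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            result.append(p1[i])
            i += 1
            j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return result

docs = ["new home sales top forecasts",
        "home sales rise in july",
        "increase in home sales in july",
        "july new home sales rise"]
index = build_index(docs)
print(intersect(index["home"], index["july"]))  # [1, 2, 3]
```

The merge-based intersection is why posting lists are kept sorted: two lists of lengths m and n intersect in O(m + n) rather than O(m · n).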
Content: Tentative plan (subject to change). The lecture structure will follow the pedagogical approach of the book (see below).

The field of information retrieval also encompasses machine learning aspects. However, we will make a conscious effort to limit overlap with, and be complementary to, the Introduction to Machine Learning lecture.

1. Introduction

2. The basics of how to index and query unstructured data

3. Pre-processing the data prior to indexing: building the term vocabulary, posting lists

4. Dealing with spelling errors: tolerant retrieval

5. Scaling up to large datasets

6. How to improve performance by compressing the index

7. Ranking the results: scores and the vector space model

8. Evaluating the quality of information retrieval: relevance

9. Query expansion

10. Structured retrieval: when the data is not quite unstructured (XML or HTML)

11. Alternative approach: Probabilistic information retrieval

12. Alternative approach: Language models

13. Crawling the Web

14. Link analysis (PageRank)
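As an illustration of item 14, the PageRank idea can be sketched as a power iteration over a small link graph. The graph below is made up for illustration; the damping factor 0.85 follows the original formulation:

```python
# Power-iteration sketch of PageRank on a tiny made-up link graph.
# links[u] lists the pages that page u points to.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
pages = sorted(links)
n = len(pages)
d = 0.85                        # damping factor
rank = {p: 1.0 / n for p in pages}

for _ in range(50):             # iterate until (approximately) converged
    new = {p: (1 - d) / n for p in pages}
    for u, outs in links.items():
        share = d * rank[u] / len(outs)   # split u's rank over its outlinks
        for v in outs:
            new[v] += share
    rank = new

# Page "c" has the most inlinks and collects the most rank.
print({p: round(rank[p], 3) for p in pages})
```

Every page here has at least one outlink, so the total rank stays 1; a production implementation must also handle dangling nodes (pages with no outlinks).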
Literature: C. D. Manning, P. Raghavan, H. Schütze, Introduction to Information Retrieval, Cambridge University Press.
Prerequisites / Notice: Prior knowledge of linear algebra, data structures and algorithms, and probability theory (at the Bachelor's level) is required.
252-3900-00L | Big Data for Engineers | 6 credits | 2V + 2U + 1A | G. Fourny
This course is not intended for Computer Science and Data Science students!
Abstract: The key challenge of the information society is to turn data into information, information into knowledge, and knowledge into value. This has become increasingly complex: data comes in larger volumes and diverse shapes, from different sources. Data is more heterogeneous and less structured than it was forty years ago. Nevertheless, it still needs to be processed fast, with support for complex operations.
Objective: This combination of requirements, together with the technologies that have emerged to address them, is typically referred to as "Big Data." This revolution has led to completely new ways to do business, e.g., to develop new products and business models, but also to do science -- sometimes referred to as data-driven science or the "fourth paradigm".

Unfortunately, the quantity of data produced and available -- now in the zettabyte range (10^21 bytes) per year -- keeps growing faster than our ability to process it. Hence, new architectures and approaches for processing it were, and still are, needed. Harnessing them requires a deep understanding of data not only in the large, but also in the small.

The field of databases evolves at a fast pace. In order to be prepared, to the extent possible, for the (r)evolutions that will take place in the next few decades, the lecture will emphasize paradigms and core design ideas, while today's technologies will serve as supporting illustrations thereof.

After attending this lecture, you should have gained an overview and understanding of the Big Data landscape, which is the basis for making informed decisions, i.e., for picking and orchestrating the relevant technologies to address each business use case efficiently and consistently.
Content: This course gives an overview of database technologies and of the most important database design principles that lay the foundations of the Big Data universe.

It specifically targets students with a scientific or engineering background, rather than a Computer Science background.

The material is organized along three axes: data in the large, data in the small, data in the very small. A broad range of aspects is covered, with a focus on how they all fit together in the big picture of the Big Data ecosystem.
- physical storage (HDFS, S3)
- logical storage (key-value stores, document stores, column stores, data warehouses)
- data formats and syntaxes (XML, JSON, CSV)
- data shapes and models (tables, trees, graphs)
- an overview of programming languages with a focus on their type systems (SQL, XQuery)
- the most important query paradigms (selection, projection, joining, grouping, ordering, windowing)
- paradigms for parallel processing (MapReduce) and technologies (Hadoop, Spark)
- optimization techniques (functional and declarative paradigms, query plans, rewrites, indexing)
- applications.
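The MapReduce paradigm listed above can be sketched as an in-memory word count, the canonical example: a map phase emits (key, value) pairs, a shuffle groups them by key, and a reduce phase aggregates each group. Frameworks such as Hadoop or Spark distribute the shuffle across machines; this toy version, with made-up function names, just sorts in memory:

```python
from itertools import groupby
from operator import itemgetter

# Word count in the MapReduce style.
def map_phase(documents):
    # Map: emit (word, 1) for every word occurrence.
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all values by key. A distributed framework
    # does this across the network; here we sort in memory.
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield key, [v for _, v in group]

def reduce_phase(grouped):
    # Reduce: aggregate the values of each key.
    for key, values in grouped:
        yield key, sum(values)

docs = ["big data big ideas", "data in the large data in the small"]
counts = dict(reduce_phase(shuffle(map_phase(docs))))
print(counts)  # {'big': 2, 'data': 3, ...}
```

The appeal of the paradigm is that map and reduce are pure functions over key-value pairs, so the framework can parallelize them freely as long as the shuffle delivers all values of a key to the same reducer.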

Large scale analytics and machine learning are outside of the scope of this course.
Literature: Papers from scientific conferences and journals. References will be given as part of the course material during the semester.
Prerequisites / Notice: This course is not intended for Computer Science and Data Science students. Computer Science and Data Science students interested in Big Data MUST attend the Master's level Big Data lecture, offered in Fall.

Requirements: programming knowledge (Java, C++, Python, PHP, ...) as well as basic knowledge of databases (SQL). If you have already built your own website with a backend SQL database, that is perfect.