BEGIN:VCALENDAR
VERSION:2.0
PRODID:icalendar-ruby
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
DTSTART:20190331T030000
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=3
TZNAME:CEST
END:DAYLIGHT
BEGIN:STANDARD
DTSTART:20191027T020000
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10
TZNAME:CET
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTAMP:20191117T025744Z
UID:5b4d8fb087028858162168@ist.ac.at
DTSTART:20190613T160000
DTEND:20190613T180000
DESCRIPTION:Speaker: Gitta Kutyniok\nhosted by Laszlo Erdös\nAbstract: Des
pite the outstanding success of deep neural networks in real-world applica
tions\, most of the related research is empirically driven and a mathemati
cal foundation is almost completely missing. The main goal of a neural net
work is to approximate a function\, which for instance encodes a classific
ation task. Thus\, one theoretical approach to deriving a fundamental unde
 rstanding of deep neural networks focuses on their approximation abilities
 . In this talk we will provide an introduction to this research area. Afte
r a general overview of the mathematics of deep neural networks\, we will
 disc
uss theoretical results which prove that not only do (memory-optimal) neur
al networks have as much approximation power as classical systems such as
wavelets or shearlets\, but they are also able to beat the curse of dimens
ionality. On the numerical side\, we will then show that superior performa
nce can typically be achieved by combining deep neural networks with class
ical approaches from approximation theory.
LOCATION:Big Seminar room Ground floor / Office Bldg West (I21.EG.101)\, IS
T Austria
ORGANIZER:boosthui@ist.ac.at
SUMMARY:The Approximation Power of Deep Neural Networks: Theory and Applica
tions
URL:https://talks-calendar.app.ist.ac.at/events/1806
END:VEVENT
END:VCALENDAR