Let me introduce myself

Hi there! My name is Thymios!

I am a fifth-year PhD candidate (expecting to graduate by May 2023) in the Computer Science (CS) department at the University of Illinois Urbana-Champaign (UIUC), where I investigate machine learning for audio processing problems with my advisor, Prof. Paris Smaragdis. During my PhD, I have also conducted research at Google AI, Mitsubishi Electric Research Laboratories (MERL), and Reality Labs at Meta (formerly FRL). Before that, I graduated with a diploma (Bachelor's and MEng equivalent) in Electrical and Computer Engineering (ECE) from the National Technical University of Athens (NTUA).
My current research focuses on developing and utilizing efficient neural networks for more generalizable audio and audio-visual source separation. I am particularly interested in unsupervised ways of learning directly from mixtures of sounds, as well as federated learning algorithms that work under realistic non-IID settings. You might also be interested in the work done by my amazing labmates and friends at UIUC: Jonah Casebeer, Zhepei Wang, and Krishna Subramani.
My research has been generously funded by Google through a Google PhD Fellowship, MERL, Meta Reality Labs, an NSF grant and a NIFA grant.
For more information, please check my academic CV.
Besides research, I love traveling, playing the guitar, and playing soccer. During summer vacations, I spend most of my time playing beach rackets, especially on the beautiful island of Santorini, where I am from. If you also like etymology, Thymios derives from the Ancient Greek name “Euthýmios (Ευθύμιος)”, composed of two elements: “eû ‎(εὖ)” (well) plus “thūmós (θῡμός)” (soul, as the seat of emotion, feeling, life, desire, will, temper, passion), meaning “in good spirits, of good cheer, clear”.
What's up?

News Feed

Paper Accepted! 07/24/2022 I am 😃 that we will present AudioScopeV2 at ECCV 2022! If you want to learn about improved audio-visual attention models and calibration for on-screen sound separation, check out our paper: [project page] [datasets]
Honors & Awards 06/29/2022 I was named a highlighted reviewer (top 10%) for ICML 2022!
Honors & Awards 04/20/2022 I was named a highlighted reviewer (top 8.8%) for ICLR 2022!
Presentation 04/20/2022 I am giving a virtual presentation of RemixIT at ICASSP 2022! You can learn more here:
Paper Accepted! 01/27/2022 Continual self-training with bootstrapped remixing for speech enhancement at ICASSP 2022! Check our cool paper, [pdf]
Honors & Awards 10/14/2021 I was recognized as an outstanding reviewer (top 8%) for the NeurIPS 2021 conference!
Honors & Awards 09/29/2021 I was nominated as one of UIUC's four representatives for the renowned worldwide "Google PhD Fellowship"!
Presentation 09/23/2021 I am giving a virtual presentation of our paper "Separate but Together: Unsupervised Federated Learning for Speech Enhancement from Non-IID Data" at WASPAA 2021! You can check the video presentation here! [paper] [code] [slides].
Status Change 09/20/2021 I am going to work as a student researcher at Mitsubishi Electric Research Laboratories (MERL), where I will be supervised by Dr. Gordon Wichern and Dr. Jonathan Le Roux.
Presentation 06/25/2021 John Hershey and I gave a talk at a CVPR 2021 workshop about self-supervised audio-visual sound separation!
Paper Accepted! 06/20/2021 Unsupervised federated learning for speech enhancement is going to be presented at WASPAA 2021! Check our cool paper, "Separate but Together: Unsupervised Federated Learning for Speech Enhancement from non-IID Data." here: [pdf] [code] [video]
Status Change 05/17/2021 I am going to spend my summer as a research intern at Facebook Reality Labs (FRL), where I will be supervised by Dr. Anurag Kumar!
Status Change 05/07/2021 After a long and fruitful collaboration with the sound separation team at Google AI Perception, it's time to say goodbye! I would like to express my sincere gratitude towards Scott Wisdom and John Hershey who were not simply my managers but also acted as my advisors and mentors! We did great things together, I will miss you guys!
Presentation 05/02/2021 I am giving a virtual presentation of our paper "Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds" at ICLR 2021! You can check the video presentation here! [paper] [slides].
Honors & Awards 04/28/2021 I was chosen as a finalist (top 3.5% of the applications) for the renowned "Facebook PhD Fellowship".
Paper Accepted! 01/28/2021 Our paper with Dimitris "Unified Gradient Reweighting for Model Biasing with Applications to Source Separation" has been accepted at ICASSP 2021! Please take a look at how you can exploit biases in NNs here: [pdf] [code]
Paper Accepted! 01/12/2021 AudioScope has been accepted at ICLR 2021! You can take a look at our paper: "Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds" [pdf]
Education Timeline

Education

Diploma (BS + MEng) in Electrical and Computer Engineering (ECE) at the
National Technical University of Athens (NTUA)
Athens, Greece | Oct 2012 - Jun 2018

  • Area: Computer Science
  • Highest Honors - top 2%
  • GPA: Cumulative 9.36/10.00 | Major 9.56/10.00
  • Thesis: "Manifold Learning and Nonlinear Recurrence Dynamics for Speech Emotion Recognition on Various Timescales" [pdf] [slides]
  • Advisor: Prof. Alexandros Potamianos

For more information, please check my academic CV.

Professional Timeline

Work Experience

Research Intern at Google AI
Cambridge, Massachusetts, USA | May 2022 - Aug 2022

Next-level audio-visual sound source separation.

Graduate Teaching & Research Assistant at Audio Lab UIUC
Urbana-Champaign, Illinois, USA | Aug 2018 - May 2022

Research Intern at Facebook Reality Labs (FRL)
Redmond, Washington, USA | May 2021 - Aug 2021

Self-supervised speech enhancement at scale.

Student Researcher at Google AI
Urbana, Illinois, USA | Aug 2020 - May 2021

In-the-wild audio-visual universal sound source separation of on-screen sounds.

Research Intern at Google AI
Cambridge, Massachusetts, USA | May 2020 - Aug 2020

Unsupervised single-channel universal source separation using mixtures of mixtures. Purely unsupervised mixture invariant training obtains comparable results to fully supervised approaches!

Research Intern at Google AI
Cambridge, Massachusetts, USA | May 2019 - Aug 2019

Exploiting high-level semantic representations of sounds to boost the performance of universal sound source separation systems. The proposed architecture improved the state of the art in sound separation settings where a variety of source types may be active, without the need for additional labels.

Machine Learning Engineer at Behavioral Signals
Los Angeles, California, USA | May 2017 - Jul 2018

Leading machine learning infrastructure development. Building state-of-the-art models for speech emotion recognition and integrating them into product-level solutions. Automating inference by implementing dynamic graph pipelines and optimizing feature-extraction algorithms in Python.

Junior Researcher at ATHENA Research Center
Athens, Greece | May 2016 - Jul 2018

Building models for real-time speech emotion recognition and multimodal engagement detection (European project: BabyRobot).

Research Intern at SBA Research (IAESTE traineeship program)
Vienna, Austria | Jul 2016 - Aug 2016

End-to-end implementation (circuit-level connections, BLE microprocessor programming, message encryption, Linux/Windows drivers and front-ends) of an automatic screen locker for increased computer security.

IT Advisory Intern at Ernst and Young
Athens, Greece | Jul 2015 - Oct 2015

Working on Piraeus Bank's database maintenance as an external partner. Performing financial data analysis and risk prediction.

Private Tutor
Athens, Greece | May 2013 - Jun 2017

Selected courses taught: Maths, Physics, Differential Analysis, Algorithms.

For more information, please check my academic CV.

My contribution

Selected Research Papers

  • Tzinis, E., Adi, Y., Ithapu, V. K., Xu, B., Smaragdis, P., and Kumar, A., "RemixIT: Continual self-training of speech enhancement models via bootstrapped remixing." To appear in IEEE Journal of Selected Topics in Signal Processing (JSTSP), 2022. [DOI] [pdf] [video].
  • Tzinis, E., Wisdom, S., Remez, T., and Hershey, J. R., "AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation." In Proceedings of the European Conference on Computer Vision (ECCV), 2022. [pdf] [datasets] [video] [poster] [slides] [website].
  • Tzinis, E., Wichern, G., Subramanian, A., Smaragdis, P., and Le Roux, J., "Heterogeneous target speech separation." In Proceedings of Interspeech, 2022. [DOI] [pdf] [code] [video] [slides] [audio-samples].
  • Tzinis, E., Adi, Y., Ithapu, V. K., Xu, B., and Kumar, A., "Continual self-training with bootstrapped remixing for speech enhancement." In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022. [DOI] [pdf] [video] [poster] [slides].
  • Tzinis, E., Casebeer, J., Wang, Z., and Smaragdis, P., "Separate but Together: Unsupervised Federated Learning for Speech Enhancement from non-IID Data." In Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2021. [DOI] [pdf] [code] [video] [slides].
  • Tzinis, E., Wisdom, S., Jensen, A., Hershey, S., Remez, T., Ellis, D. P., and Hershey, J. R., "Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds." In Proceedings of International Conference on Learning Representations (ICLR), 2021. [DOI] [pdf] [website] [video] [slides].
  • Tzinis, E., Bralios, D., and Smaragdis, P., "Unified Gradient Reweighting for Model Biasing with Applications to Source Separation." In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021. [DOI] [pdf] [code] [video].
  • Wisdom, S., Tzinis, E., Erdogan, H., Weiss, R. J., Wilson, K., and Hershey, J. R., "Unsupervised Sound Separation Using Mixtures of Mixtures." In Proceedings of Advances in Neural Information Processing Systems (NeurIPS), 2020. [DOI] [pdf] [video].
  • Tzinis, E., Wang, Z., and Smaragdis, P., "Sudo rm -rf: Efficient Networks for Universal Audio Source Separation." In Proceedings of IEEE International Workshop on Machine Learning for Signal Processing (MLSP), 2020. [DOI] [pdf] [code] [video].
  • Tzinis, E., Venkataramani, S., Wang, Z., Subakan, Y. C., and Smaragdis, P., "Two-Step Sound Source Separation: Training on Learned Latent Targets." In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020. [DOI] [pdf] [code] [video].
  • Tzinis, E., Wisdom, S., Hershey, J. R., Jansen, A., and Ellis, D. P., "Improving Universal Sound Separation Using Sound Classification." In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020. [DOI] [pdf] [video].
  • Tzinis, E., Venkataramani, S., and Smaragdis, P., "Unsupervised deep clustering for source separation: direct learning from mixtures using spatial information." In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 81-85. [DOI] [pdf] [code]


Thesis

    Tzinis, E., "Manifold Learning and Nonlinear Recurrence Dynamics for Speech Emotion Recognition on Various Timescales." Diploma Thesis at the National Technical University of Athens, Electrical and Computer Engineering (ECE) Department. [pdf] [slides]

A full and always-updated list of my research papers is also available on my Google Scholar profile.

Project Demos

Demos and Blog

Feel free to enjoy some fancy audio-visual samples of past and ongoing research work here!

Get in Touch

Contact Me

My office is located at the Siebel Center for Computer Science | No. 3332
201 N. Goodwin Ave. Urbana, IL, 61801, USA


“Γυμνοί ήλθομεν οι πάντες, γυμνοί και απελευσόμεθα.”
Αίσωπος, 620-560 π.Χ.
“Naked we all came into this world, and naked we shall depart.”
Aesop, 620-560 BCE

Efthymios Tzinis © 2022
