About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
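For reference, a minimal sketch of the relevant setting, assuming a standard Jekyll setup (the exact file contents of this template are an assumption):

```yaml
# _config.yml (hypothetical excerpt)
# When `future` is false, Jekyll skips posts whose date is later
# than the build time, so scheduled posts stay hidden until due.
future: false
```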
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in arXiv, 2023
State-of-the-art AI models largely lack an understanding of the cause-effect relationship that governs human understanding of the real world. Consequently, these models do not generalize to unseen data, often produce unfair results, and are difficult to interpret. This has led to efforts to improve the trustworthiness aspects of AI models. Recently, causal modeling and inference methods have emerged as powerful tools. This review aims to provide the reader with an overview of causal methods that have been developed to improve the trustworthiness of AI models. We hope that our contribution will motivate future research on causality-based solutions for trustworthy AI.
Recommended citation: Niloy Ganguly, Dren Fazlija, Maryam Badar, Marco Fisichella, Sandipan Sikdar, Johanna Schrader, Jonas Wallat, Koustav Rudra, Manolis Koubarakis, Gourab K. Patro, Wadhah Zai El Amri, and Wolfgang Nejdl (2023). "A Review of the Role of Causality in Developing Trustworthy AI Systems" arXiv:2302.06975. https://arxiv.org/abs/2302.06975
Published in AICPM 2023, 2023
Causal reasoning has garnered much attention in the AI research community, resulting in an influx of causality-based AI methods in recent years. We believe that this sudden rise of Causal AI has led to many publications that primarily evaluate their proposed algorithms in specifically designed experimental setups. Hence, comparisons between different causal methods, as well as with existing state-of-the-art non-causal approaches, become increasingly difficult. To make Causal AI more accessible and to facilitate comparisons to non-causal methods, we analyze the use of real-world datasets and existing causal inference tools within relevant publications. Furthermore, we support our hypothesis by outlining well-established tools for benchmarking different trustworthiness aspects of AI models (interpretability, fairness, robustness, privacy, and safety), as well as healthcare tools, and by showing that these systems are not prevalent in the respective Causal AI publications.
Recommended citation: Dren Fazlija (2023). "Reporting on Real-World Datasets and Packages for Causal AI Research" In Artificial Intelligence, Causality and Personalised Medicine Symposium 2023.
Published in AAAI 2024 Spring Symposium on User-Aligned Assessment of Adaptive AI Systems, 2024
In the image domain, adversarial examples represent maliciously perturbed images that look benign to humans but greatly mislead state-of-the-art ML models. Previously, researchers ensured the imperceptibility of their altered data points by restricting perturbations via ℓp norms. However, recent publications claim that creating natural-looking adversarial examples without such restrictions is also possible. With much more freedom to instill malicious information into data, these unrestricted adversarial examples allow attackers to operate outside the expected threat models. However, surveying existing image-based methods, we noticed a lack of human evaluations of the proposed image modifications. To analyze the imperceptibility of these attacks, we propose SCOOTER – an evaluation framework for unrestricted image-based attacks containing guidelines, standardized questions, and a ready-to-use web app for annotating unrestricted adversarial images.
Recommended citation: Dren Fazlija, Arkadij Orlov, Johanna Schrader, Monty-Maximilian Zühlke, Michael Rohs, Daniel Kudenko (2024). "How Real Is Real? A Human Evaluation Framework for Unrestricted Adversarial Examples" AAAI 2024 Spring Symposium on User-Aligned Assessment of Adaptive AI Systems. https://aair-lab.github.io/aia2024/papers/fazlija_aia24.pdf
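As a side note, the ℓp-norm restriction mentioned in the abstract can be illustrated with a short sketch. This is not code from the paper; the function name, array conventions, and budget eps are illustrative assumptions:

```python
import numpy as np

def clip_linf(image: np.ndarray, adv_image: np.ndarray, eps: float = 8 / 255) -> np.ndarray:
    """Project an adversarial image back onto the l_inf ball of radius eps
    around the original image, then onto the valid pixel range [0, 1]."""
    perturbation = np.clip(adv_image - image, -eps, eps)  # bound each per-pixel change
    return np.clip(image + perturbation, 0.0, 1.0)        # keep pixel values valid
```

Unrestricted adversarial examples drop exactly this projection step, which is why human judgments of imperceptibility become necessary.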
Published in LREC-COLING 2024, 2024
The absence of explicitly tailored, accessible annotated datasets for educational purposes presents a notable obstacle for NLP tasks in languages with limited resources. This study initially explores the feasibility of using machine translation (MT) to convert an existing dataset into a Tigrinya dataset in SQuAD format. As a result, we present TIGQA, an expert-annotated dataset containing 2,685 question-answer pairs covering 122 diverse topics such as climate, water, and traffic. These pairs are from 537 context paragraphs in publicly accessible Tigrinya and Biology books. Through comprehensive analyses, we demonstrate that the TIGQA dataset requires skills beyond simple word matching, requiring both single-sentence and multiple-sentence inference abilities. We conduct experiments using state-of-the-art MRC methods, marking the first exploration of such models on TIGQA. Additionally, we estimate human performance on the dataset and juxtapose it with the results obtained from pre-trained models. The notable disparities between human performance and the best model performance underscore the potential for future enhancements to TIGQA through continued research. Our dataset is freely accessible via the provided link to encourage the research community to address the challenges in the Tigrinya MRC.
Recommended citation: Hailay Kidu Teklehaymanot, Dren Fazlija, Niloy Ganguly, Gourab K. Patro, and Wolfgang Nejdl (2024). "TIGQA: An Expert-Annotated Question-Answering Dataset in Tigrinya" In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. https://arxiv.org/abs/2404.17194
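For readers unfamiliar with the SQuAD format mentioned above, here is a minimal sketch of a SQuAD-style record. The field names follow the public SQuAD v1.1 schema; the title, context, and question below are invented English placeholders, not actual TIGQA data:

```python
import json

# One SQuAD-style record: a context paragraph with its question-answer pairs.
example_entry = {
    "title": "Climate",
    "paragraphs": [
        {
            "context": "Rainfall in the highlands peaks between June and September.",
            "qas": [
                {
                    "id": "climate-q1",
                    "question": "When does rainfall peak in the highlands?",
                    # answer_start is the character offset of the answer span
                    # within the context string.
                    "answers": [
                        {"text": "between June and September", "answer_start": 32}
                    ],
                }
            ],
        }
    ],
}

print(json.dumps(example_entry, indent=2))
```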
Published:
My colleague, Monty-Maximilian Zühlke, and I gave an interactive talk to a 9th-grade class about the impact of AI on the education system. We outlined some basic concepts behind LLMs and showcased examples in which ChatGPT performs well and some in which it performs surprisingly poorly. We discussed these examples with the students and examined the effects of working with AI systems.
Published:
This was just a poster presentation about our curated list of relevant tools for Causal Machine Learning, which we created as part of our survey on increasing trustworthiness via Causal ML. Check out the survey, the GitHub repository covering all relevant Causal ML tools, or the brief summary of our tools overview. → More information here
Published:
This is a talk I gave as part of our ongoing work on SCOOTER – a human assessment framework for unrestricted adversarial examples. It was a great opportunity to get high-quality feedback and to connect with great researchers. Shoutout to Rohan Chitnis, who helped me find a print shop near Stanford that provides same-day poster printing! (I lost the original poster on the way to my hotel… 😅). → More information here
Published:
Together with Daniel Kudenko, Monty-Maximilian Zühlke and I gave an interactive talk to a group of teachers about the impact of AI on the education system. We outlined some basic concepts behind LLMs and sparked discussions by encouraging the participants to create teaching material with existing LLM-based systems. → More information here and here
Published:
Monty-Maximilian Zühlke and I gave an interactive talk to all three 10th-grade classes of the Gymnasium am Markt (or simply GamMa) in Achim, Lower Saxony, about the impact of AI on the education system. To keep the students engaged for 90 minutes, we also asked them to create exams with ChatGPT and other systems. Overall, it was interesting to see the differences between the three classes and how modern a German school can be (all students had their own iPad, with which they could share their content live on a digital board at the front of the room). → More information here
Published:
Monty-Maximilian Zühlke and I gave an interactive talk to an 8th-grade class of a local IGS (i.e., a German comprehensive school) about the impact of AI on the education system. To keep the students engaged for 90 minutes, we also asked them to create exams with ChatGPT and other systems. Overall, it was interesting to see the differences between the responses given by the IGS students and students from previous workshops (all of which were Gymnasium classes, i.e., students from the highest tier of secondary schooling in Germany).
Published:
Monty-Maximilian Zühlke and I engaged in a dialogue about AI’s impact on everyday life, a topic we’ve previously explored in workshops for school students. However, this workshop was a special opportunity, as it allowed us to delve into the intersection of religion and AI, a topic that resonates with Protestant non-experts from diverse backgrounds. As part of the 6th Ecumenical Trinity Reception at St. Nicolai in Hannover, we presented the impact and potential of modern AI technologies to this diverse audience. The discussions that followed were unique and insightful, and we are committed to expanding this workshop series to reach further audience demographics.
Published:
Together with Dr. Daniel Kudenko and Michael Hobusch, Monty-Maximilian Zühlke and I had the opportunity to present the impact of modern AI to vocational school students of the BBS Burgdorf. In our section of the workshop, Monty-Maximilian and I presented the strengths and weaknesses of modern LLM services like ChatGPT and Perplexity AI.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.