
S.N.R. Buijsman

17 publications

What variables should be used to get explanations (of AI systems) that are easily interpretable? The challenge of finding the right degree of abstraction in explanations, also called the ‘variables problem’, has been actively discussed in the philosophy of science. The challenge is ...
ChatGPT is a powerful language model from OpenAI that is arguably able to comprehend and generate text. It is expected to greatly impact society, research, and education. An essential step in understanding ChatGPT’s expected impact is to study its domain-specific answering capa ...
Relevancy is a prevalent term in value alignment. We either need to keep track of the relevant moral reasons, embed the relevant values, or learn from the relevant behaviour. What relevancy entails in particular cases, however, is often ill-defined. The reas ...

Transparency for AI systems: A value-based approach

With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy, and the EU AI Act mandates forms of transparency for all high-risk ...
Machine learning techniques are driving — or soon will be driving — much of scientific research and discovery. Can they function as models similar to more traditional modeling techniques in scientific contexts? Or might they replace models altogether if they deliver sufficient pr ...
This chapter explores the principles and frameworks of human-centered artificial intelligence (AI), specifically focusing on user modeling, adaptation, and personalization. It introduces a four-dimensional framework comprising paradigms, actors, values, and levels of realization ...
Technologies have all kinds of impacts on the environment, on human behavior, on our society, and on what we believe and value. But some technologies are not just impactful; they are also socially disruptive: they challenge existing institutions, social practices, beliefs and conc ...
Machine learning is increasingly used in scientific contexts, from the recent breakthroughs with AlphaFold2 in protein fold prediction to the use of ML in parametrization for large climate/astronomy models. Yet it is unclear whether we can obtain scientific explanations from suc ...
Process reliabilist accounts claim that a belief is justified when it is the result of a reliable belief-forming process. Yet over what range of possible token processes is this reliability calculated? I argue against the idea that all possible token processes (in the actual worl ...
AI systems are increasingly being used to support human decision making. It is important that AI advice is followed appropriately. However, according to existing literature, users typically under-rely or over-rely on AI systems, and this leads to sub-optimal team performance. In ...
Why should we explain opaque algorithms? Here I discuss four papers that argue that, in fact, we don’t have to. Explainability, according to them, isn’t needed for trust in algorithms, nor is it needed for other goals we might have. I give a critical overview of these argumen ...
Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable, especially in high-stakes domains. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have d ...
Users of sociotechnical systems often have no way to independently verify whether the system output that they use to make decisions is correct; they are epistemically dependent on the system. We argue that this leads to problems when the system is wrong, namely to bad decisions ...
With recent advances in explainable artificial intelligence (XAI), researchers have started to pay attention to concept-level explanations, which explain model predictions with a high level of abstraction. However, such explanations may be difficult for laypeople to digest due to ...
Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly their outputs. But what are these explanations and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of scie ...
In recent years philosophers have used results from cognitive science to formulate epistemologies of arithmetic (e.g. Giaquinto in J Philos 98(1):5–18, 2001). Such epistemologies have, however, been criticised, e.g. by Azzouni (Talking about nothing: numbers, hallucinations and f ...
Clarke and Beck argue that the approximate number system (ANS) doesn’t represent non-numerical magnitudes because of its second-order character. A sensory integration mechanism can explain this character as well, provided the dumbbell studies involve interference from systems that segment by objects such as ...