A Model for Evaluating Algorithmic Systems Accountability
Algorithmic systems make decisions that have a great impact on our lives. As our dependence on them grows, so does the need for transparency and for holding them accountable. This paper presents a model for evaluating how transparent these systems are, focusing on their algorithmic component as well as on the maturity of the organizations that utilize them. We applied this model to a classification algorithm created and used by a large financial institution. The results of our analysis indicated that the organization was only partially in control of its algorithm and lacked the necessary benchmark to interpret the results it produced and to assess the validity of its inferences.
Author
Yiannis Kanellopoulos