Ethical and reliable artificial intelligence

Due diligence for AI projects:

WE ARE INDEPENDENT AUDITORS

We help our clients assess and reduce the risks involved in developing and using artificial intelligence. In strict confidentiality, we analyze artificial intelligence systems from ethical, technical, and legal perspectives.

01

ARTIFICIAL INTELLIGENCE SYSTEM RISK RATING

For documentation and accountability purposes.

02

ALGORITHMIC IMPACT ASSESSMENT

For documentation and accountability purposes.

03

TRANSPARENCY NOTICE

Clear and adequate information for consumers about the artificial intelligence system and the organization’s AI governance measures.

04

AI GOVERNANCE POLICY

Adherence to global AI standards regarding legislation, ethical principles, and best practices.

05

ITERATIVE EVALUATION

Periodic evaluation aimed at the continuous improvement of artificial intelligence systems.

06

ETHICAL ADVICE

Project support. Feedback on specific ethical challenges. Workshops and discussions. A stakeholder perspective on sensitive projects.

André Gualtieri

Ph.D. in Philosophy of Law from PUC-SP. Master's in Philosophy of Law from USP. Artificial intelligence ethicist. Lawyer. CEO and founder of Technoethics – School of Ethics of Technology. Invited professor at Mackenzie University. Researcher and author of publications on ethics, law, and artificial intelligence. Member of ForHumanity, an international community for the reliability of AI and autonomous systems. Research coordinator of the Ethics4AI Group of Mackenzie University in conjunction with the Brazilian Institute of Education, Development, and Research (IDP).

When considering a new technology's potential to be accepted by users, one important factor is creepiness: the degree of discomfort or unease that a given technology causes in people. The higher a technology's level of creepiness, the lower its acceptance tends to be.

Andréa Naccache

Ph.D. in Philosophy and General Theory of Law and postdoctoral researcher in Civil Law at the University of São Paulo. She holds an MBA in Finance from FGV. As a lawyer, psychologist, and clinical psychoanalyst, she works across these specialties to foster the conditions for innovation in companies, the arts, and the sciences. Her book Brazilian Creativity was a finalist for the Jabuti award and was selected by the Museu da Casa Brasileira. Her doctoral research on the relationship between algorithmic randomness and justice earned her invitations to discussions at Stanford for five consecutive years.

“[…] Law should not conceive of AI machines as technical-scientific tools for public and private endeavors (much less as potential projections of human intelligence and intentionality), but rather as networks of human relationships, akin to large plurilateral pacts in which the proposal, conduct, and even good faith of the parties are personally identified and observed.”

Felipe S. Abrahão

Researcher in the theory of computation, information theory, mathematical logic, complex systems science, and epistemology. He is currently a postdoctoral researcher at the Centre for Logic, Epistemology and the History of Science (CLE) of the University of Campinas (UNICAMP), São Paulo, Brazil, with a fellowship funded by the São Paulo Research Foundation (FAPESP). In addition to working as an associate researcher at Algora Auditing, he is an associate researcher at the Data Extreme Lab (DEXL) of the National Laboratory for Scientific Computing (LNCC); the Algorithmic Nature Group, LABORES for the Natural and Digital Sciences, Paris, France; and the AUTOMACOIN foundation. He holds a bachelor's degree in Mathematics from the Federal University of Rio de Janeiro (UFRJ), as well as a master's and a doctorate from the interdisciplinary graduate program in History of Sciences and Techniques and Epistemology (HCTE) of the Center for Mathematical and Natural Sciences (CCMN) at UFRJ, with scholarships from CAPES and CNPq, respectively.

“[…] there will be a critical stage in which […] the networked behavior will be unpredictable in precise terms of an increasing amount of bits that are incompressible by any recursive/computable procedure based on the chosen framework, if the observer only a priori knows the behavior of the nodes when isolated.”

contact us

Write to us through the website or by email: atendimento@algora.digital 

Form




    Algora is a collaboration among Gualtieri & Naccache, a boutique law firm focused on artificial intelligence; the Outsite innovation program; and Felipe S. Abrahão.

    Rua Caçapava, 49 • Cjto 56
    Jardim Paulista • 01408-010