Artificial intelligence (AI) and automated decision-making (ADM) increasingly support or replace human decision-makers in UK public administration. Examples include live facial-recognition cameras used by police, automated calculations of social-security benefits, predictive environmental models and algorithms recommending planning or licensing decisions.

UK government guidance treats ADM broadly, covering both solely automated decisions and those assisting human judgment. Consequently, the legal principles described in this Note apply even when a human nominally makes the final decision but relies heavily on an AI-generated score or recommendation.

These systems promise efficiency but can have legal or similarly significant effects on individuals. The UK General Data Protection Regulation, Assimilated Regulation (EU) 2016/679 (UK GDPR), the Data Protection Act 2018 (DPA 2018), the Human Rights Act 1998 (HRA 1998) and the Equality Act 2010 (EqA 2010) set rules to ensure that automated decisions are lawful, fair and transparent. Public bodies are also bound by common-law administrative duties, including the need for lawful authority, procedural fairness and rationality, and by freedom of information law.

This Practice Note summarises the law