A team from Ankara University conducted an online survey of 147 Turkish medical oncologists, assessing their exposure to AI tools (notably LLMs), their self-assessed knowledge, and their ethical perceptions. Although 77.5% reported using AI, only 9.5% had received formal training. Respondents advocated structured education programs, robust legal frameworks, and patient consent to guide the responsible integration of AI into clinical oncology.

Key points

  • Surveyed 147 Turkish oncologists: 77.5% report using AI tools like ChatGPT; only 9.5% received formal training.
  • Over 86% rate their knowledge of machine learning and deep learning as limited; 47.6% report no familiarity with LLMs.
  • 79.6% find current legal regulations inadequate, calling for ethical audits, informed consent, and shared liability frameworks.

Why it matters: This survey highlights critical training and regulatory gaps to safely integrate AI into oncology practice.

Q&A

  • What is a large language model (LLM)?
  • Why is formal AI training important for oncologists?
  • What ethical concerns arise from using AI in patient management?
  • How could shared liability work for AI-driven errors?


Read full article: Turkish medical oncologists' perspectives on integrating artificial intelligence: knowledge, attitudes, and ethical considerations