Scopus · YÖKSİS Matched

Comparison of three chatbots as an assistant for problem-solving in clinical laboratory

Clinical Chemistry and Laboratory Medicine · June 2024

YÖKSİS DOI Match Found

This Scopus article is also recorded in the YÖKSİS database. The YÖKSİS data is shown below.

YÖKSİS Records
Comparison of three chatbots as an assistant for problem-solving in clinical laboratory
Clinical Chemistry and Laboratory Medicine · 2024 SCI-Expanded
PROFESSOR SEDAT ABUŞOĞLU →
Comparison of three chatbots as an assistant for problem-solving in clinical laboratory
Clinical Chemistry and Laboratory Medicine · 2024 SCI-Expanded
PROFESSOR ALİ ÜNLÜ →
Comparison of three chatbots as an assistant for problem-solving in clinical laboratory
Walter de Gruyter GmbH · 2023 SCI-Expanded
PROFESSOR SEDAT ABUŞOĞLU →
Comparison of three chatbots as an assistant for problem-solving in clinical laboratory
Clinical Chemistry and Laboratory Medicine (CCLM) · 2024 SCI-Expanded
ASSOCIATE PROFESSOR GÜLSÜM ABUŞOĞLU →
Comparison of three chatbots as an assistant for problem-solving in clinical laboratory
Clinical Chemistry and Laboratory Medicine (CCLM) · 2024 SCI-Expanded
PROFESSOR ALİ ÜNLÜ →

Article Information

Journal: Clinical Chemistry and Laboratory Medicine
Publication Date: June 2024
Volume / Pages: 62 · 1362-1366
Abstract: Objectives: Data generation in clinical settings is ongoing and perpetually increasing. Artificial intelligence (AI) software may help detect data-related errors or facilitate process management. The aim of the present study was to test the extent to which the frequently encountered pre-analytical, analytical, and post-analytical errors in clinical laboratories, and likely clinical diagnoses, can be detected through the use of a chatbot. Methods: A total of 20 case scenarios, 20 multiple-choice questions, and 20 direct questions related to errors observed in pre-analytical, analytical, and post-analytical processes were developed in English. Difficulty assessment was performed for the 60 questions. Responses by 4 chatbots to the questions were scored in a blinded manner by 3 independent laboratory experts for accuracy, usefulness, and completeness. Results: According to the Chi-squared test, the accuracy score of ChatGPT-3.5 (54.4%) was significantly lower than those of CopyAI (86.7%) (p=0.0269) and ChatGPT v4.0 (88.9%) (p=0.0168) in the case scenarios. In direct questions, there was no significant difference among the accuracy scores of ChatGPT-3.5 (67.8%), WriteSonic (69.4%), ChatGPT v4.0 (78.9%), and CopyAI (73.9%) (p=0.914, p=0.433, and p=0.675, respectively). CopyAI (90.6%) performed significantly better than ChatGPT-3.5 (62.2%) (p=0.036) in multiple-choice questions. Conclusions: These applications showed considerable performance in identifying the cases and answering the questions. In the future, the use of AI applications is likely to increase in clinical settings if they are trained and validated by technical and medical experts within a structural framework.
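The accuracy comparisons in the abstract rest on Pearson's chi-squared test of independence over correct/incorrect response counts. A minimal sketch of that test, in pure Python: the contingency-table counts below are illustrative only (the abstract reports percentages and p-values, not raw counts), chosen to roughly match the 54.4% vs 86.7% accuracy contrast.

```python
def chi2_2x2(table):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the independence hypothesis.
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Illustrative counts (not the study's raw data): correct vs incorrect
# responses for two chatbots over a hypothetical 90 scored items each.
table = [
    [49, 41],  # "ChatGPT-3.5"-like: ~54% accuracy
    [78, 12],  # "CopyAI"-like: ~87% accuracy
]
stat = chi2_2x2(table)
# A 2x2 table has 1 degree of freedom; stat > 3.841 implies p < 0.05.
print(f"chi2 = {stat:.2f}, significant at 0.05: {stat > 3.841}")
```

With counts of this size the statistic is well above the 3.841 critical value, mirroring the significant gap the study reports for the case scenarios.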

Authors (4)

1. Sedat Abusoglu (ORCID: 0000-0002-2984-0527)
2. Muhittin Serdar
3. Ali Ünlü (ORCID: 0000-0002-9991-3939)
4. Gulsum Abusoglu

Keywords

artificial intelligence · assistant · clinical laboratory · machine learning

Institutions

Acıbadem Mehmet Ali Aydınlar Üniversitesi · Istanbul, Turkey
Selçuk Tip Fakültesi · Konya, Turkey
Selçuk Üniversitesi · Selçuklu, Turkey

Metrics

Citations: 4
Authors: 4
Keywords: 4

Authors in Our System