AI, Research Ethics, and the Prospect of the “Research Robot”: A Foresight Study of Faculty Perceptions in Algerian Universities

الذكاء الاصطناعي وأخلاقيات البحث العلمي: مقاربة استشرافية لمخاطر ظهور «الباحث-الروبوت»

Intelligence artificielle et éthique de la recherche scientifique : approche prospective des risques de l’émergence du « chercheur-robot »

Djamila Benzidoun Brahimi Kherfia Djoudi

Djamila Benzidoun Brahimi Kherfia Djoudi, « AI, Research Ethics, and the Prospect of the “Research Robot”: A Foresight Study of Faculty Perceptions in Algerian Universities », Aleph [Online], published 25 April 2026, accessed 12 May 2026. URL: https://aleph.edinum.org/16438

This study examines the ethical tensions generated by the growing use of artificial intelligence in scientific research and analyses, in a foresight perspective, the representations associated with the emergence of the “research robot” as a possible substitute for the human researcher. Drawing on a descriptive survey design, the study relies on an electronic questionnaire administered to a purposive sample of 204 faculty members from Algerian universities between 24 February and 24 July 2025. The instrument explored three major dimensions: reasons for using AI, conditions for its responsible use, and concerns linked to its increasing integration into research practices. Data were processed with SPSS 25 using frequencies, percentages, means, standard deviations, Cronbach’s alpha, Spearman’s rho, and effect-size indicators as reported in the source dataset. The findings show widespread use of AI tools among respondents, especially for data processing, language revision, reference management, and analytical support. At the same time, the strongest concerns relate to academic plagiarism, erosion of ethical standards, lack of regulatory frameworks, data bias, and the symbolic displacement of the human researcher. Statistically significant correlations were reported between ethical violations and fears surrounding the “research robot,” between disciplinary field and level of AI use, and between perceived drawbacks and researchers’ concerns. The article argues that the issue is not whether AI should be rejected, but how its use can be governed through methodological vigilance, transparency, training, and institutionally grounded ethical regulation.

تتناول هذه الدراسة التوترات الأخلاقية التي يثيرها التوسع المتزايد في استخدام الذكاء الاصطناعي داخل البحث العلمي، وتحلل ــ في أفق استشرافي ــ التمثلات المرتبطة بظهور «الروبوت الباحث» بوصفه بديلاً محتملاً للباحث البشري. وتعتمد الدراسة تصميماً وصفياً قائماً على المسح، استند إلى استبيان إلكتروني وُجِّه إلى عينة قصدية قوامها 204 من أعضاء هيئة التدريس في الجامعات الجزائرية خلال الفترة الممتدة من 24 فبراير إلى 24 يوليو 2025. وقد شمل الاستبيان ثلاثة أبعاد رئيسة: دوافع استخدام الذكاء الاصطناعي، وشروط الاستخدام الرشيد، ومستويات القلق المرتبطة بتزايد حضوره في الممارسة البحثية. وعولجت البيانات ببرنامج SPSS 25 باستخدام التكرارات والنسب المئوية والمتوسطات والانحرافات المعيارية وألفا كرونباخ ومعاملات سبيرمان ومؤشرات حجم الأثر كما وردت في النتائج الأصلية. وتبين النتائج انتشاراً واسعاً لأدوات الذكاء الاصطناعي بين المبحوثين، ولا سيما في معالجة البيانات، والمراجعة اللغوية، وإدارة المراجع، والدعم التحليلي. غير أن أعلى مستويات القلق تعلقت بالسرقة العلمية، وإضعاف المعايير الأخلاقية، وغياب الأطر التنظيمية، وانحياز البيانات، والإزاحة الرمزية للباحث البشري. كما أظهرت النتائج وجود علاقات ذات دلالة إحصائية بين الانتهاكات الأخلاقية والخوف من «الروبوت الباحث»، وبين الحقل العلمي وكثافة الاستخدام، وبين إدراك المساوئ ومستوى القلق لدى الباحثين. وتخلص الدراسة إلى أن القضية ليست في رفض الذكاء الاصطناعي، بل في تنظيم استخدامه عبر يقظة منهجية، وشفافية صريحة، وتكوين متخصص، وحوكمة أخلاقية مؤسساتية.

La présente étude interroge les tensions éthiques suscitées par la montée en puissance de l’intelligence artificielle dans la recherche scientifique et analyse, dans une perspective prospective, les représentations associées à l’émergence du « robot chercheur » en tant que possible substitut du chercheur humain. Inscrite dans un devis descriptif par enquête, elle repose sur un questionnaire électronique administré à un échantillon raisonné de 204 enseignants-chercheurs issus d’universités algériennes entre le 24 février et le 24 juillet 2025. L’instrument a porté sur trois dimensions principales : les motifs d’usage de l’IA, les conditions de son emploi responsable et les inquiétudes liées à son intégration croissante dans les pratiques de recherche. Les données ont été traitées avec SPSS 25 au moyen de fréquences, pourcentages, moyennes, écarts-types, coefficients alpha de Cronbach, corrélations de Spearman et indicateurs de taille d’effet tels qu’ils figurent dans le jeu de résultats transmis. Les résultats montrent une diffusion très large des outils d’IA, en particulier pour le traitement des données, la révision linguistique, la gestion des références et l’assistance analytique. En parallèle, les inquiétudes les plus fortes portent sur le plagiat académique, l’affaiblissement des normes éthiques, l’absence de cadres régulateurs, les biais de données et le déplacement symbolique du chercheur humain. Des relations statistiquement significatives sont rapportées entre les violations éthiques et la crainte du « robot chercheur », entre le champ disciplinaire et l’intensité d’usage, ainsi qu’entre la perception des inconvénients et le niveau d’inquiétude. L’article soutient que l’enjeu n’est pas de rejeter l’IA, mais d’en organiser l’usage au moyen d’une vigilance méthodologique, d’exigences de transparence, d’une formation adaptée et d’une régulation éthique institutionnalisée.

Introduction

Artificial intelligence has moved from being a peripheral technical aid to becoming a structuring component of contemporary knowledge production. In scientific environments, it is now used to retrieve sources, classify corpora, process data, generate summaries, assist with writing, detect patterns, and support modelling. This acceleration has obvious pragmatic advantages; however, it also redistributes responsibility within the research process and raises a decisive epistemological question: at what point does assistance become substitution?

The rapid expansion of generative AI has intensified this question. The issue is no longer limited to automation in a narrow technical sense. It now concerns the credibility of scientific outputs, the transparency of procedures, the integrity of authorship, the control of bias, the traceability of data processing, and the preservation of a specifically human critical function within scholarly inquiry. In this context, the figure of the “research robot” is not merely rhetorical. It condenses a cluster of anxieties surrounding the possibility that digital systems could progressively occupy tasks historically associated with the researcher’s judgement, reflexivity, and accountability.

Recent scholarship has shown that AI can improve efficiency while simultaneously creating novel ethical risks. Resnik and Hosseini argue that the use of AI in research requires renewed guidance concerning transparency, bias control, disclosure, responsibility, and the status of AI-generated contributions. Universities are also beginning to formalise institutional policies for AI use, especially in higher education, where ethical guidance, authentic assessment, and literacy training have become central governance issues. At the same time, the broader literature on AI ethics continues to stress recurring principles such as transparency, fairness, accountability, and privacy, while identifying weak ethical literacy and vague operational norms as persistent implementation problems.

Within this broader international debate, the Algerian university context deserves sustained empirical attention. The local issue is not simply whether researchers use AI—because, increasingly, they do—but rather how they justify its use, which stages of research they entrust to it, what risks they perceive, and how those risks reshape their understanding of scientific legitimacy. This article addresses that gap through an empirical and foresight-oriented investigation of faculty perceptions in Algerian universities.

The contribution of the study is twofold. First, it documents the concrete uses, perceived benefits, and declared concerns associated with AI in scientific research among a diversified academic sample. Second, it connects these practical uses to a forward-looking ethical question: whether the cumulative normalisation of AI in research fosters a symbolic and methodological displacement of the human researcher by what the manuscript names the “research robot”.

Accordingly, the article asks three interrelated questions: How is AI currently mobilised in scientific research by university faculty in Algeria? Which ethical, methodological, and institutional risks are associated with this use? And to what extent do these risks nurture concern about the emergence of an “alternative researcher” grounded in automation? The hypotheses tested in the study are designed to answer these questions by linking patterns of AI use to research-ethics violations, disciplinary differences, faculty characteristics, perceived drawbacks, and requirements for responsible governance.

The study is therefore organised around a set of main and subsidiary hypotheses that connect current patterns of AI use to ethical risk, disciplinary variation, and governance needs.

1. Theoretical and conceptual framework

This study is framed at the intersection of research ethics, technology acceptance, and foresight analysis. The Technology Acceptance Model explains adoption through perceived usefulness and perceived ease of use, while the Diffusion of Innovations framework clarifies how compatibility, observability, and relative advantage shape the social circulation of a technology. These models are useful here because the respondents do not encounter AI as an abstract innovation; they assess it in relation to concrete research tasks, institutional constraints, and professional norms.

At the same time, a purely adoption-oriented framework would be insufficient. The problem is not reducible to the question of whether researchers accept AI. It also concerns how the adoption of AI transforms responsibility, authorship, and trust in scientific work. Research ethics therefore provides the normative horizon of the study. In this perspective, AI is not evaluated only for its efficiency but for the conditions under which it preserves—or weakens—the integrity of inquiry.

For the purposes of this article, artificial intelligence is understood as a set of computational techniques capable of performing tasks commonly associated with human cognitive activity, including pattern detection, language generation, classification, prediction, and decision support. Scientific research is understood as a rigorous and methodologically controlled activity oriented toward the production of valid knowledge. Research ethics refers to the principles, norms, and professional obligations that regulate data handling, intellectual ownership, methodological honesty, and accountability. The expression “research robot” designates, in this study, not a material humanoid agent but a socio-technical horizon in which increasingly automated systems perform a growing share of scholarly tasks to the point of symbolically challenging the centrality of the researcher.

The literature provides several relevant anchors. Resnik and Hosseini emphasise that AI use in research does not invalidate traditional research ethics, but it does create an urgent need for renewed guidance concerning disclosure, bias management, synthetic data, public accountability, and authorship. Jin and colleagues, working on policies across forty universities in six global regions, show that institutions are beginning to govern generative AI through ethical guidance, assessment reform, and literacy training, but that gaps remain regarding privacy, access, and long-term policy coherence. Khan and colleagues show that transparency, privacy, accountability, and fairness recur across the AI ethics literature, while vague principles and limited ethical operationalisation remain major obstacles.

The present study distinguishes itself from prior work in three ways. First, whereas several existing contributions are normative, bibliometric, or policy-oriented, this article is based on field data collected directly from university faculty. Second, it links practical usage patterns to explicit ethical anxieties rather than treating adoption as a purely functional matter. Third, it introduces a foresight dimension by interpreting present-day uses as indicators of a possible future reconfiguration of academic labour, authority, and legitimacy.

1.1 Research problem and hypotheses

Building on the framework outlined above, the study examines AI not only as a tool adopted for reasons of efficiency, but also as a source of ethical tension and professional uncertainty within scientific research. More specifically, it seeks to determine whether the increasing use of AI is associated with perceived ethical violations, and whether these perceived violations in turn reinforce concerns about the possible emergence of the “research robot” as a figure of partial substitution for the human researcher. On this basis, the study is organised around two main hypotheses, each of which includes subsidiary hypotheses.

  1. H1. There is a statistically significant relationship, at the 0.05 level, between the use of artificial intelligence and ethical violations of scientific research from the perspective of faculty members in Algerian universities.

  • H1a. There is a statistically significant relationship between AI use and the researchers’ scientific fields.

  • H1b. There is a statistically significant relationship between the perceived importance of AI use and the characteristics of faculty members.

  1. H2. There is a statistically significant relationship, at the 0.05 level, between ethical violations associated with AI and concerns about the emergence of the “research robot” from the perspective of faculty members in Algerian universities.

  • H2a. There is a statistically significant relationship between the drawbacks of AI use and the concerns generated by increased reliance on AI among faculty members.

  • H2b. There is a statistically significant relationship between concerns about AI use and the requirements identified for its optimal use among faculty members.

1.2 Study objectives

In line with these hypotheses, the study pursues a set of interrelated objectives. These objectives are not limited to describing the spread of AI tools in academic work; they also aim to clarify how such tools are perceived, what forms of risk they generate, and how their growing presence may reshape the ethical and professional conditions of scientific inquiry. The purpose is therefore both descriptive and analytical, with a prospective dimension that considers the long-term implications of current practices.

First, to identify the principal fields and stages of scientific research in which AI is currently used by the respondents. This objective makes it possible to determine whether AI remains concentrated in technical or support functions, or whether it is extending into more central stages of knowledge production.

Second, to analyse the perceived importance and practical utility of AI within research workflows. This involves examining the extent to which researchers consider AI to be useful for tasks such as information retrieval, data processing, reference management, writing support, modelling, and analytical visualisation.

Third, to identify the methodological, ethical, and institutional drawbacks associated with AI use in scientific research. Particular attention is given to issues such as plagiarism, data bias, weak transparency, the erosion of critical skills, and the absence of clear legal or ethical regulations.

Fourth, to measure the concerns linked to the increasing reliance on AI and to the possible emergence of the “research robot.” Here, the study seeks to understand whether researchers perceive AI merely as an auxiliary tool, or whether they associate it with a deeper transformation in the division of scholarly labour, responsibility, and intellectual authority.

Fifth, to propose a foresight-informed framework for the responsible and human-centred governance of AI in scientific research. This final objective is especially important, since the study does not aim merely to diagnose risks, but also to contribute to a more balanced model of integration in which technological innovation remains compatible with academic integrity and the central role of the human researcher.

2. Methodology

The study adopts a descriptive survey design with a foresight orientation. The descriptive component aims to document current uses, representations, and concerns, while the foresight component seeks to interpret these present tendencies as indicators of a possible future transformation of research practices. The choice of survey research is consistent with the objective of measuring declared behaviours, perceptions, and concerns among academic staff.

The target population consists of faculty members working in Algerian universities, university centres, and higher schools. Because a full census was not feasible, the study used a purposive sample contacted through professional e-mail. Data were collected between 24 February 2025 and 24 July 2025, and the final sample included 204 valid responses.

The questionnaire was structured around three major axes: reasons for using AI and the limits of its benefits; requirements for the responsible use of AI technologies; and researchers’ concerns regarding the growing place of AI in scientific work. The instrument was reviewed by specialists in media and communication sciences, and the manuscript reports that items were refined in light of their observations. Reliability was tested with Cronbach’s alpha and yielded acceptable to very high coefficients for the three thematic axes.

The data were processed with SPSS 25. The manuscript reports the use of frequencies, percentages, arithmetic means, standard deviations, Cronbach’s alpha coefficients, Spearman’s rho correlations, and effect-size indicators. Because the raw dataset was not available for independent recalculation at the editorial stage, the numerical values reproduced below correspond to the author-provided results after linguistic, structural, and typographic normalisation.

Methodologically, the study offers a useful empirical basis, but its interpretation should remain attentive to several limits inherent in purposive sampling, self-reported practices, and the absence of item-level annexes in the submitted version. These limits do not invalidate the findings, but they do delimit the scope of generalisation and reinforce the need for prudent interpretation.

2.1 Characteristics of the study sample

Table 1. Characteristics of the study sample

| Characteristic | Category | Frequency | Percentage (%) |
|---|---|---|---|
| Gender | Male | 100 | 49.0 |
| | Female | 104 | 51.0 |
| | Total | 204 | 100.0 |
| Age group | Less than 30 years | 11 | 5.4 |
| | 30-39 years | 73 | 35.8 |
| | 40-49 years | 98 | 48.0 |
| | 50 years and above | 22 | 10.8 |
| Academic rank | PhD student | 21 | 10.3 |
| | Assistant Professor | 53 | 26.0 |
| | Lecturer | 97 | 47.5 |
| | Professor | 33 | 16.2 |
| Scientific field | Medical sciences and pharmacy | 10 | 4.9 |
| | Mathematics, computer science, and physics | 33 | 16.2 |
| | Natural and biological sciences | 33 | 16.2 |
| | Humanities and social sciences | 44 | 21.6 |
| | Literature and foreign languages | 21 | 10.3 |
| | Economics, commerce, and management sciences | 42 | 20.6 |
| | Law and political sciences | 21 | 10.3 |
| Years of experience | Less than 5 years | 24 | 11.8 |
| | 5-9 years | 64 | 31.4 |
| | 10-15 years | 52 | 25.5 |
| | More than 15 years | 64 | 31.4 |
| | Total | 204 | 100.0 |

The table above shows the characteristics of the study sample. Females represented 51%, while males represented 49%. Regarding age categories, the group aged 40–49 years ranked first with 48%, followed by the group aged 30–39 years with 35.8%. The age group 50 years and above came third with 10.8%, and the category under 30 years represented 5.4%.

For academic rank, lecturers ranked first with 47.5%, followed by assistant professors with 26%. Professors came third with 16.2%, while PhD students ranked last among the four categories with 10.3%.

Regarding scientific specialization, Humanities and Social Sciences ranked first with 21.6%, followed by Economics, Commerce, and Management Sciences with 20.6%. The fields of Mathematics, Computer Science, and Physics and Natural and Biological Sciences each represented 16.2%. Meanwhile, Law and Political Sciences and Literature and Foreign Languages each represented 10.3%, and Medical Sciences and Pharmacy represented 4.9%.

For years of experience, the category 5–9 years ranked first with 31.4%, followed by more than 15 years with the same percentage (31.4%). The category 10–15 years accounted for 25.5%, while less than 5 years ranked last with 11.8%.

2.2 Instrument validity and reliability

Table 2. Cronbach’s alpha coefficients for the questionnaire

| Axis | Number of items | Cronbach's alpha |
|---|---|---|
| Axis 1 | 10 | 0.944 |
| Axis 2 | 11 | 0.762 |
| Axis 3 | 10 | 0.976 |
| Overall score | - | 0.894 |

To ensure the validity of the questionnaire, the instrument was reviewed by a group of experts in Media and Communication Sciences, and its items were refined, corrected, and clarified on the basis of their suggestions. To verify reliability, Cronbach’s alpha was computed; the coefficients obtained are presented in Table 2 above.

The reported reliability levels indicate that the instrument was sufficiently stable for exploratory analysis of perceptions and declared practices.

2.3 Statistical processing and interpretive thresholds

According to the submitted manuscript, the data were analysed with SPSS 25 using the following procedures:

  • Frequencies and percentages.

  • Cronbach's alpha coefficient.

  • Arithmetic mean and standard deviation.

  • Spearman's rank correlation coefficient.

  • Effect size: omega-squared (ω²).
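For readers who wish to reproduce comparable statistics outside SPSS, the two reliability and association measures listed above can be sketched in a few lines of NumPy. This is an illustrative implementation on toy data, not the study's dataset or the authors' SPSS syntax; `cronbach_alpha` and `spearman_rho` are hypothetical helper names.

```python
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of Likert scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

def spearman_rho(x, y):
    # Spearman's rho: Pearson correlation of tie-averaged ranks
    def ranks(v):
        v = np.asarray(v, dtype=float)
        r = np.empty(len(v))
        r[np.argsort(v)] = np.arange(1, len(v) + 1)
        for val in np.unique(v):                  # average ranks over ties
            r[v == val] = r[v == val].mean()
        return r
    rx, ry = ranks(x), ranks(y)
    return float(np.corrcoef(rx, ry)[0, 1])
```

With perfectly consistent items the alpha approaches 1.0, and a strictly monotone pair of variables yields a rho of 1.0, which is the behaviour one would expect before applying the functions to real questionnaire data.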

The weighted-mean interpretation grid reported in the manuscript is reproduced below for clarity.

Table 3. Interpretation of weighted averages

| Response | Range | Interpretation |
|---|---|---|
| Strongly oppose | 1.00-1.80 | Very low |
| Oppose | 1.81-2.60 | Low |
| Neutral | 2.61-3.40 | Moderate |
| Agree | 3.41-4.20 | High |
| Strongly agree | 4.21-5.00 | Very high |
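The grid in Table 3 is a simple band lookup, and can be expressed as a small helper for anyone re-checking the "Decision" columns in the results tables below. The function name `interpret` is hypothetical; the bands are taken directly from Table 3.

```python
# Interpretation bands for weighted means, as reported in Table 3.
BANDS = [
    (1.00, 1.80, "Very low"),
    (1.81, 2.60, "Low"),
    (2.61, 3.40, "Moderate"),
    (3.41, 4.20, "High"),
    (4.21, 5.00, "Very high"),
]

def interpret(mean: float) -> str:
    # Round to two decimals first, matching the precision of the tables.
    m = round(mean, 2)
    for low, high, label in BANDS:
        if low <= m <= high:
            return label
    raise ValueError("mean outside the 1-5 Likert range")
```

For example, the overall mean of 3.729 reported later falls in the 3.41-4.20 band and is therefore labelled "High", while 3.07 falls in the 2.61-3.40 band and is labelled "Moderate".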

3. Results

3.1 General use of AI and reasons for adoption

Table 4. Use of AI applications in scientific research

| Response | Frequency | Percentage (%) |
|---|---|---|
| Yes | 204 | 100.0 |
| No | 0 | 0.0 |
| Total | 204 | 100.0 |

The table above illustrates the use of artificial intelligence software and applications in scientific research by faculty members and researchers. The results showed that the entire study sample uses artificial intelligence in scientific research at a rate of 100%.

Table 5. Reasons for using AI tools and applications in scientific research

| Item | Frequency | Percentage (%) | Rank |
|---|---|---|---|
| Data collection and analysis | 188 | 29.1 | 1 |
| Language proofreading and translation | 161 | 24.9 | 2 |
| Academic writing and editing | 78 | 12.1 | 4 |
| Reference management | 78 | 12.1 | 5 |
| Simulation and modelling | 84 | 13.0 | 3 |
| Predicting future trends | 57 | 8.8 | 6 |
| Total citations | 646 | 100.0 | - |

The results in the table show that data collection and analysis ranks first with 29.1%, followed by language proofreading and translation in second place with 24.9%. Simulation and modeling ranks third with 13%, while academic editing and reference management share fourth place with 12.1%. The lowest percentage is related to the use of AI in predicting future trends, with 8.8%.

These results align with the principle of perceived usefulness from the Technology Acceptance Model, as well as the principles of relative advantage and compatibility from the Diffusion of Innovations Theory. This is largely due to the ease of integrating certain AI tools and their user-friendly nature.

The findings also reflect an awareness, particularly among faculty members and researchers in Algerian universities, of the value of AI applications in accessing, processing, and analysing large datasets.

Table 6. Perceived contribution of AI to improving research quality

| Response | Frequency | Percentage (%) |
|---|---|---|
| Yes | 85 | 41.7 |
| No | 48 | 23.5 |
| Maybe | 71 | 34.8 |
| Total | 204 | 100.0 |

The results indicate that 41.7% of faculty members and researchers believe that using AI contributes to improving the quality of scientific research. This share reflects an awareness of the benefits of AI, particularly its role in speeding up data collection and improving the accuracy of results, and is consistent with the principle of perceived usefulness as a central driver of adoption. Meanwhile, 34.8% expressed hesitation or uncertainty about this issue, and 23.5% rejected the idea of relying on AI in scientific research.

3.2 Tools used and fields benefitting most

Table 7. AI tools most commonly used by the respondents

| Tool | Frequency | Percentage (%) | Rank |
|---|---|---|---|
| Zotero | 52 | 7.0 | 6 |
| Scite | 153 | 20.6 | 2 |
| Grammarly | 107 | 14.4 | 5 |
| Bard | 125 | 16.8 | 4 |
| ChatGPT | 156 | 21.0 | 1 |
| Other | 150 | 20.2 | 3 |
| Total citations | 743 | 100.0 | - |

The results show the most commonly used AI tools among researchers. ChatGPT recorded the highest usage rate with 21.0%, followed closely by Scite with 20.6%, and then other tools with 20.2%. Bard came next with 16.8%, followed by Grammarly with 14.4%, while Zotero recorded the lowest usage rate at 7.0%. These findings reflect researchers’ tendency to favor generative and interactive text-based tools over traditional reference-management tools.

Table 8. Scientific fields that benefit most from AI tools

| Field | Frequency | Percentage (%) | Rank |
|---|---|---|---|
| Natural, engineering, and physical sciences | 16 | 7.8 | 5 |
| Medical, health, and pharmaceutical sciences | 40 | 19.6 | 2 |
| Social and human sciences | 23 | 11.3 | 4 |
| Economics, marketing, and business administration | 40 | 19.6 | 3 |
| Cybersecurity and data science | 73 | 35.8 | 1 |
| Other | 12 | 5.9 | 6 |
| Total | 204 | 100.0 | - |

The results show that cybersecurity and data science represent the fields that benefit the most from AI tools, with a rate of 35.8%. This is followed by the fields of medical, health, and pharmaceutical sciences and economics, marketing, and business administration, both with 19.6%. The social and human sciences ranked next with 11.3%, while the natural, engineering, and physical sciences recorded a lower rate of 7.8%. The category “Other” came last with 5.9%.

This distribution indicates that technical and data-driven fields represent the most suitable environment for adopting AI applications compared to theoretical disciplines.

The prominence of cybersecurity and data-oriented fields can be explained by the high perceived usefulness of AI in these areas, where the technology assists with large-scale processing, pattern detection, and prediction. The relative ease of integrating AI into already technical research environments likely reinforces adoption.

3.3 Perceived importance of AI in scientific research

Table 9. Perceived importance of AI in scientific research

| Item | SA | A | N | D | SDis | Mean | Std. dev. | Rank | Decision |
|---|---|---|---|---|---|---|---|---|---|
| Big data analysis | 61 | 63 | 45 | 0 | 35 | 3.56 | 1.372 | 7 | High |
| Accuracy of research data and results | 32 | 137 | 0 | 35 | 0 | 3.81 | 0.901 | 6 | High |
| Automation of research processes | 17 | 61 | 45 | 81 | 0 | 3.07 | 1.015 | 10 | Moderate |
| Efficient management of references | 15 | 189 | 0 | 0 | 0 | 4.07 | 0.262 | 3 | High |
| Fast access to data and information | 72 | 98 | 17 | 17 | 0 | 4.10 | 0.873 | 2 | High |
| Reducing human errors | 53 | 59 | 31 | 52 | 9 | 3.47 | 1.245 | 8 | High |
| Saving time and effort | 84 | 58 | 43 | 9 | 10 | 3.97 | 1.116 | 4 | High |
| Enhancing innovation, simulation, and modelling | 45 | 55 | 29 | 56 | 19 | 3.25 | 1.321 | 9 | Moderate |
| Advanced data processing, statistical testing, and visualisation | - | - | - | - | - | 4.13 | 0.858 | 1 | High |
| Machine learning and prediction | 58 | 97 | 19 | 23 | 7 | 3.86 | 1.060 | 5 | High |
| Overall mean | - | - | - | - | - | 3.729 | 1.002 | - | High |

Note: SA = strongly agree; A = agree; N = neutral; D = disagree; SDis = strongly disagree; Std. dev. = standard deviation.

The highest reported mean concerns the item dealing with multiple options for data processing, advanced statistical testing, and visualisation (M = 4.13), followed by fast access to data and information (M = 4.10), efficient management of references and sources (M = 4.07), and saving time and effort (M = 3.97). Taken together, these scores show that respondents value AI above all for procedural acceleration and analytical support.

By contrast, automation of research processes records the lowest mean (M = 3.07), while enhancing innovation, simulation, and modelling remains comparatively less strongly endorsed (M = 3.25). Big-data analysis (M = 3.56) and reducing human errors (M = 3.47) occupy an intermediate position. The respondents thus appear more convinced by visible operational gains than by claims of full automation.

These findings suggest that researchers perceive the importance of AI primarily in terms of speed, efficiency, and the diversification of analytical options, rather than as a substitute for scholarly judgement or as an autonomous engine of innovation.

These results highlight the role of perceived usefulness as a major determinant of technology acceptance. Respondents value AI when it produces visible gains in efficiency, speed, and procedural support.

The aspects that offer clear relative advantages and observable practical benefits—such as speed and multiple analytical options—receive higher appreciation. By contrast, automation-heavy or more speculative uses appear less compatible with current research routines, which may explain their weaker uptake.
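The means and standard deviations in Table 9 follow directly from the frequency columns, each Likert response being weighted from 5 (strongly agree) to 1 (strongly disagree). As a worked check, a short sketch reproduces the values for the item "Saving time and effort"; it assumes, as the reported figures confirm, that the manuscript used the sample standard deviation (n - 1 denominator).

```python
import math

# Frequencies for "Saving time and effort" from Table 9:
# strongly agree = 84, agree = 58, neutral = 43, disagree = 9, strongly disagree = 10
counts = {5: 84, 4: 58, 3: 43, 2: 9, 1: 10}
n = sum(counts.values())  # 204 respondents

mean = sum(score * c for score, c in counts.items()) / n
var = sum(c * (score - mean) ** 2 for score, c in counts.items()) / (n - 1)
sd = math.sqrt(var)

print(round(mean, 2), round(sd, 3))  # 3.97 1.116, matching the table
```

The same arithmetic applied to any other row of Table 9 recovers its reported mean, which makes the published figures internally verifiable even without access to the raw dataset.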

3.4 Requirements for responsible use

Table 10. Requirements for using AI in scientific research

| Requirement | Frequency | Percentage (%) | Rank |
|---|---|---|---|
| Technical skills in using AI tools | 67 | 32.8 | 2 |
| Technological infrastructure and smart equipment | 96 | 47.1 | 1 |
| High-speed internet | 14 | 6.9 | 4 |
| A regulating legal framework | 20 | 9.8 | 3 |
| Ethical codes | 7 | 3.4 | 5 |
| Total | 204 | 100.0 | - |

The findings indicate that the most important requirement for using artificial intelligence among the respondents is the availability of technological infrastructure and smart equipment (47.1%). This is followed by technical skills in dealing with AI tools (32.8%). The need for a regulating legal framework comes third (9.8%), while high-speed internet (6.9%) and ethical codes (3.4%) are cited less frequently. These results suggest that digital infrastructure and human technical capacity constitute the immediate foundations for the successful integration of artificial intelligence into scientific research. At the same time, the relatively low frequency attributed to legal and ethical frameworks should not be interpreted as insignificance; rather, it may indicate that respondents view them as institutional conditions that remain underdeveloped or externally administered.

From an interpretive perspective, the availability of infrastructure and digital equipment likely increases both perceived ease of use and perceived usefulness, thereby strengthening researchers’ intentions to adopt AI tools. Similarly, equipping researchers with technical skills enhances self-efficacy and supports a more critical, competent, and responsible use of these technologies. However, while technological and skill-based requirements address immediate operational needs, the absence of clear legal and ethical frameworks may undermine the long-term trustworthiness of AI-assisted research outputs.

In practical terms, the respondents’ answers point toward a governance agenda that must combine four interdependent dimensions: adequate technical infrastructure, sustained capacity building, legal regulation, and ethical framing. More specifically, this implies, first, strengthening technological infrastructure and providing intensive training for researchers in the use of AI tools; and second, developing clear legal and ethical frameworks capable of regulating AI use and ensuring the credibility and reliability of research outputs.

3.5 Research stages that benefit from AI

Table 11. Extent to which the stages of scientific research benefit from AI

| Research stage | High benefit | Moderate benefit | No benefit | Mean | SD | Rank | Decision |
|---|---|---|---|---|---|---|---|
| Analysing the research gap and building the study topic | 52 | 33 | 119 | 1.67 | 0.857 | 6 | Very low |
| Searching for new research topics | 47 | 0 | 157 | 1.46 | 0.844 | 7 | Very low |
| Managing previous studies | 0 | 71 | 133 | 1.35 | 0.478 | 8 | Very low |
| Collecting data from different sources | 0 | 47 | 157 | 1.23 | 0.422 | 10 | Very low |
| Data filtering and reducing methodological errors | 81 | 123 | 0 | 2.40 | 0.490 | 3 | Low |
| Experiment simulation and re-modelling | 128 | 52 | 24 | 2.51 | 0.698 | 2 | Low |
| Conducting advanced analyses and visual representation of results | 95 | 85 | 24 | 2.35 | 0.682 | 4 | Low |
| Academic editing of scientific research | 171 | 0 | 33 | 2.68 | 0.738 | 1 | Moderate |
| Plagiarism detection | 0 | 71 | 133 | 1.35 | 0.478 | 9 | Very low |
| Scientific publishing | 47 | 72 | 85 | 1.81 | 0.784 | 5 | Low |

The results show that the stages benefiting most from AI are those associated with operational support and applied analysis. Academic editing ranks first (M = 2.68; SD = 0.738), followed by experiment simulation and re-modelling (M = 2.51) and data filtering aimed at reducing methodological errors (M = 2.40). Conducting advanced analyses and visual representation of results also occupies an important place (M = 2.35).

By contrast, the perceived benefit of AI is markedly lower in the theoretical and exploratory moments of research. Searching for new research topics records a mean of 1.46, analysing the research gap a mean of 1.67, and both managing previous studies and plagiarism detection a mean of 1.35. Collecting data from different sources records the lowest mean in the table (M = 1.23).

This configuration indicates that respondents perceive AI as especially useful where the research process involves formatting, modelling, filtering, or procedural acceleration, but far less useful where the task requires conceptual originality, bibliographic positioning, or problem construction. In other words, AI is welcomed more readily as an assistant in execution than as a substitute in epistemic design.

This pattern is consistent with a cautious form of adoption: respondents accept AI where it supports labour-intensive operations, yet remain reluctant to delegate the more interpretive and problem-formulating stages of research.

The results therefore point toward a differentiated model of use in which AI is integrated into the research workflow without being granted equal legitimacy across all stages of knowledge production.

3.6 Drawbacks and barriers to adoption

Table 12. Disadvantages of using AI in scientific research

| Item | SA | A | N | D | SD | Mean | Std. dev. | Rank | Decision |
|---|---|---|---|---|---|---|---|---|---|
| Limits creative thinking | 68 | 72 | 5 | 8 | 51 | 3.48 | 1.583 | 6 | High |
| Reduces analytical and critical skills | 55 | 66 | 16 | 17 | 50 | 3.29 | 1.547 | 9 | Moderate |
| Algorithms affect research quality and result credibility | 58 | 65 | 13 | 16 | 52 | 3.30 | 1.574 | 8 | Moderate |
| Weak skills in using AI software | 34 | 112 | 0 | 41 | 17 | 3.51 | 1.222 | 4 | High |
| Privacy issues | 37 | 60 | 22 | 44 | 41 | 3.04 | 1.431 | 11 | Moderate |
| Data inaccuracy and bias | 25 | 127 | 0 | 28 | 24 | 3.50 | 1.218 | 5 | High |
| Academic plagiarism | 71 | 98 | 14 | 15 | 6 | 4.04 | 0.989 | 1 | High |
| Institutionalisation of the “researcher robot” | 51 | 57 | 29 | 54 | 13 | 3.39 | 1.287 | 7 | Moderate |
| Absence of legal regulations | 61 | 63 | 45 | 0 | 35 | 3.56 | 1.372 | 3 | High |
| Violation of ethical standards | 32 | 137 | 0 | 35 | 0 | 3.81 | 0.901 | 2 | High |
| Information security risks | 17 | 61 | 45 | 81 | 0 | 3.07 | 1.015 | 10 | Moderate |
| Overall mean | - | - | - | - | - | 3.454 | 1.285 | - | High |

Scale: SA = strongly agree; A = agree; N = neutral; D = disagree; SD = strongly disagree.

The results reveal that the most salient disadvantages of AI use in scientific research are academic plagiarism (M = 4.04), violation of ethical standards (M = 3.81), and the absence of legal regulations (M = 3.56). These values indicate that respondents identify the major risks of AI not only in technical malfunction, but in the weakening of the normative architecture that underpins scientific credibility.

This hierarchy is analytically important because it shifts the debate from mere tool performance to questions of accountability, traceability, authorship, and professional responsibility in AI-mediated knowledge production.

Privacy concerns (M = 3.04) and information security risks (M = 3.07) appear lower in the ranking, although they remain non-negligible. Meanwhile, weak skills in using AI software (M = 3.51) and data inaccuracy and bias (M = 3.50) point to a technical knowledge gap that calls for targeted training, methodological vigilance, and institutional guidance.

Table 13. Barriers to adopting AI software in scientific research

| Obstacle | SA | A | N | D | SD | Mean | Std. dev. | Rank | Decision |
|---|---|---|---|---|---|---|---|---|---|
| Dominance of the “researcher robot” | 23 | 179 | 0 | 0 | 2 | 4.08 | 0.442 | 4 | High |
| Data bias | 76 | 96 | 15 | 17 | 0 | 4.13 | 0.875 | 3 | High |
| Inability of the legal framework to keep pace with AI developments | 72 | 83 | 2 | 32 | 15 | 3.81 | 1.274 | 6 | High |
| Digital gap | 68 | 83 | 7 | 32 | 14 | 3.78 | 1.254 | 7 | High |
| Weakness of traditional curricula and their inability to adapt to technological and contextual developments | 81 | 98 | 5 | 14 | 5 | 4.16 | 0.948 | 2 | High |
| Weak technological training and qualification | 43 | 130 | 1 | 28 | 2 | 3.90 | 0.921 | 5 | High |
| Weak financial and logistical support for research institutions | 163 | 35 | 1 | 2 | 3 | 4.73 | 0.666 | 1 | High |
| Lack of transparency | 66 | 69 | 13 | 41 | 15 | 3.64 | 1.315 | 9 | High |
| Misinformation and deepfakes | 72 | 81 | 16 | 4 | 31 | 3.78 | 1.356 | 8 | High |
| Scientific integrity | 54 | 81 | 13 | 46 | 10 | 3.60 | 1.233 | 10 | High |

Scale: SA = strongly agree; A = agree; N = neutral; D = disagree; SD = strongly disagree.

The main barriers to adopting artificial intelligence software in scientific research are structural and institutional. The strongest obstacle is weak financial and logistical support for research institutions (M = 4.73), followed by the weakness of traditional curricula and their inability to adapt to technological and contextual developments (M = 4.16), and data bias (M = 4.13). Dominance of the “researcher robot” also remains high among respondents’ concerns (M = 4.08).

These results suggest that the difficulties surrounding AI adoption extend beyond individual hesitation. They concern infrastructures, training systems, curricular modernisation, governance capacity, and the material conditions that allow researchers to use AI critically rather than passively. In this light, the problem is institutional as much as it is ethical.

Issues such as lack of transparency (M = 3.64) and scientific integrity (M = 3.60) remain important, while the inability of legal frameworks to keep pace with AI developments (M = 3.81) confirms that regulatory lag is itself perceived as a barrier to responsible adoption.

3.7 Hypothesis testing

The correlation results reported in the manuscript are synthesised below in a standardised format.

Table 14. Summary of hypothesis testing

| Relationship tested | N | Spearman’s rho | Sig. | Interpretation |
|---|---|---|---|---|
| Ethical violations ↔ concerns about the “research robot” | 204 | 0.62 | 0.000 | Strong, statistically significant |
| AI use ↔ scientific field | 204 | 0.71 | 0.000 | Very strong, statistically significant |
| Perceived importance of AI ↔ staff characteristics | 204 | 0.54 | 0.003 | Moderate positive, statistically significant |
| Drawbacks of AI use ↔ concerns | 204 | 0.66 | 0.000 | Strong, statistically significant |
| Concerns about AI use ↔ requirements for optimal use | 204 | 0.59 | 0.001 | Moderate-to-strong, statistically significant |

The correlation results indicate a strong and statistically significant relationship between violations of research ethics and concerns about the “research robot” (rho = 0.62; p < 0.001). Substantively, this means that the more respondents associate AI-mediated research with plagiarism, weak traceability, manipulation, or erosion of norms, the more they fear a displacement of the human researcher by automated procedures.

This relationship should be read as a structured association between perceptions rather than as proof of a literal technological replacement. It nevertheless shows that ethical insecurity is one of the principal matrices through which the prospect of the “research robot” becomes thinkable in academic settings.

The descriptive profile of the sample helps to contextualise this finding. Mid-career and senior academics are strongly represented, and humanities and social-science respondents appear especially attentive to the normative and interpretive dimensions of research practice. This may help explain why the issue is framed not merely as a technical innovation, but as a challenge to authorship, responsibility, and epistemic legitimacy.

The reported effect size (omega squared = 0.25) reinforces the practical significance of this association and suggests that ethical concern is not a peripheral reaction, but a substantial component of respondents’ representations of AI in research.
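For readers unfamiliar with this effect-size index, omega squared for a one-way design is derived from the ANOVA sums of squares. The sketch below is purely illustrative: the sums of squares are hypothetical values chosen to show how a coefficient in the 0.25 range can arise with N = 204, since the study's raw data are not reproduced here.

```python
# Hedged, illustrative sketch: omega-squared effect size for a one-way
# design, computed from hypothetical ANOVA components (not the study's data).

def omega_squared(ss_between, ss_within, k, n):
    """Omega-squared for k groups and n total cases."""
    df_between = k - 1
    df_within = n - k
    ms_within = ss_within / df_within
    ss_total = ss_between + ss_within
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)

# Hypothetical sums of squares, chosen only to illustrate the computation
print(round(omega_squared(ss_between=120.0, ss_within=340.0, k=4, n=204), 3))
```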

A very strong and statistically significant relationship is also reported between AI use and scientific field (rho = 0.71; p < 0.001). This indicates that AI adoption is not homogeneous across academic cultures, but is shaped by disciplinary environments, research routines, and the kinds of tasks that dominate different fields.

The descriptive tables support this interpretation: fields organised around structured data, modelling, or computational processing appear more likely to integrate AI intensively, while more interpretive fields seem to combine use with stronger ethical vigilance. The finding should therefore be read as evidence of differentiated integration rather than uniform technological diffusion.

The reported effect size (omega squared = 0.35) suggests a substantial disciplinary structuring of AI use and confirms that field-specific epistemologies help determine how far AI can be operationalised within research practice.

The relationship between perceived importance of AI and staff characteristics is moderate but statistically significant (rho = 0.54; p = 0.003). This indicates that age, academic seniority, and professional socialisation likely influence how researchers assess the practical value of AI tools.

Within the descriptive profile of the sample, the 30-39 age group appears particularly open to integrating AI into research practice, whereas the oldest group is more cautious. The result should not be over-essentialised, but it does suggest that technological adoption is mediated by career stage and by varying degrees of familiarity with digital research environments.

The reported effect size (omega squared = 0.22) indicates that these characteristics account for a meaningful share of the variance in perceived importance. This supports the idea that any policy of AI integration in higher education must be tailored to heterogeneous academic profiles rather than imagined as universally self-evident.

The relationship between perceived drawbacks of AI and researchers’ concerns is likewise strong and statistically significant (rho = 0.66; p < 0.001). The more respondents identify risks such as plagiarism, bias, weak skills, or legal uncertainty, the more they express concern about the long-term implications of AI for scientific work.

The reported effect size (omega squared = 0.31) suggests that this link is substantial. It confirms that concern is not detached from practice: it is nourished by concrete representations of what can go wrong when AI is adopted without sufficient regulation or methodological control.

Finally, the relationship between concerns about AI use and the requirements identified for its optimal use is moderate-to-strong and statistically significant (rho = 0.59; p = 0.001). The reported effect size (omega squared ≈ 0.25) indicates that concern itself helps structure the demand for governance, training, legal framing, and ethical regulation. In this sense, respondents’ anxieties are not merely defensive reactions; they also express a constructive call for institutional organisation.
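For readers wishing to replicate this kind of analysis outside SPSS, Spearman's rho can be obtained by ranking both variables (averaging tied ranks, as SPSS does) and applying the Pearson formula to the ranks. The responses below are hypothetical Likert scores used only to illustrate the computation, not the study's data.

```python
# Illustrative sketch, not the study's data: Spearman's rho computed by
# ranking both variables and correlating the ranks.

def ranks(values):
    """Average 1-based ranks, with ties assigned their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of the tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Pearson correlation applied to the ranks of x and y."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical Likert responses (1-5) for two constructs
violations = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 2, 3]
concern = [4, 5, 3, 3, 2, 5, 5, 3, 4, 4, 1, 3]
print(round(spearman_rho(violations, concern), 2))
```

In practice the same result is returned by `scipy.stats.spearmanr`, which also supplies the significance value reported in Table 14.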

4. General discussion

Taken together, the results reveal a form of pragmatic acceptance combined with ethical unease. Respondents use AI extensively, and they recognise its value for speed, data processing, language refinement, and reference management. Yet this utilitarian adoption does not translate into unconditional trust. On the contrary, the strongest concerns bear precisely on those dimensions that define the legitimacy of scientific work: originality, accountability, transparency, and methodological control.

This ambivalence is analytically important. It shows that AI is not perceived simply as a neutral efficiency tool. Researchers distinguish between assistance that supports scholarly work and forms of delegation that threaten critical judgement. In that sense, the “research robot” operates as a boundary figure: it names the point at which technological support risks becoming symbolic replacement. The concern is therefore as much professional and epistemic as it is technical.

The disciplinary differences reported in the data are equally revealing. Fields with strong data-processing cultures appear more ready to integrate AI into everyday research operations, whereas humanities and social-science respondents express higher levels of ethical vigilance. This does not mean that one group is more rational than the other; rather, disciplinary epistemologies shape what counts as acceptable delegation, trustworthy output, and valid evidence.

The findings also indicate that governance cannot be reduced to individual good will. Respondents identify infrastructure, technical skills, legal regulation, and ethical codes as decisive conditions for responsible use. This confirms that the problem is institutional. Researchers may be willing to adopt AI, but without clear policies on disclosure, authorship, data protection, and acceptable uses, such adoption remains normatively unstable.

At a broader level, the study converges with recent scholarship that calls for a human-centred regulation of AI in research. The key issue is not to oppose human intelligence to machine intelligence in a simplistic way, but to maintain a hierarchy of responsibility in which AI remains an instrument under explicit scholarly control. Human oversight, traceable disclosure, and methodological reflexivity must therefore remain non-negotiable.

Several limitations should nevertheless be acknowledged. The sample is purposive rather than probabilistic; the study relies on declared perceptions rather than observed practices; and the survey design does not permit fine-grained causal inference. These limits invite a second-stage study based on raw-data verification, item-level modelling, and comparative analysis across disciplinary clusters or institution types.

Despite these limits, the study makes a meaningful contribution by naming an emerging ethical tension in Arab and Algerian academia. It shows that concerns about AI in research are not reducible to moral panic. They are grounded in lived professional experience, in the perceived fragility of regulatory systems, and in a lucid awareness that the value of science depends not only on the efficiency of its tools, but on the integrity of its procedures and the responsibility of its agents.

5. Main findings

The main findings of the study may be summarised as follows:

    1. The increasing reliance on artificial intelligence in scientific research is accompanied by a high level of concern regarding the possible replacement—or symbolic marginalisation—of the human researcher, especially when AI use is associated with perceived ethical violations.

    2. The sample profile suggests that the 40–49 age group occupies a particularly vigilant position with regard to the ethical risks associated with AI in research, whereas the 30–39 age group appears more open to AI-supported research practices while remaining aware of their implications.

    3. Disciplinary differences play a significant role in shaping both usage patterns and attitudes. Humanities and social sciences appear to combine active use of AI with heightened ethical concern, whereas more data-intensive fields display stronger operational integration and a more instrumental orientation toward AI tools.

    4. Researchers are increasingly using a wide range of AI applications, although the intensity, function, and perceived value of these tools vary across scientific fields and across the different stages of the research process.

    5. From the respondents’ perspective, cybersecurity and data science are the fields that benefit most directly from AI tools, reflecting the stronger compatibility between such technologies and data-driven forms of inquiry.

    6. No marked descriptive gender asymmetry emerges in the reported use of AI, suggesting that the issue is structured more strongly by disciplinary affiliation, research tasks, and professional profile than by gender alone.

    7. Staff characteristics explain a meaningful share of the variance in the perceived practical importance of AI in research, which indicates that age, academic experience, and scientific specialisation influence the way AI is evaluated and integrated into scholarly work.

    8. There is a strong association between awareness of AI-related drawbacks and heightened concern about the future place of the human researcher, which confirms that practical experience with AI does not eliminate ethical anxiety, but may in fact intensify it.

    9. The overall pattern of results underscores the need for institutional safeguards, explicit norms, and capacity-building policies capable of reducing risks while preserving the benefits of technological assistance.

Conclusion

This article has examined the ethical implications of AI use in scientific research and the fears associated with the possible emergence of the “research robot” in Algerian universities. The reported data show a high level of AI use among respondents, especially in tasks related to data handling, language support, reference work, and analytical processing. At the same time, the strongest anxieties concern plagiarism, the weakening of ethical standards, data bias, the absence of legal regulation, and the erosion of the human researcher’s symbolic centrality.

The reported correlations suggest that ethical concern is not marginal. It is structurally connected to usage patterns, disciplinary cultures, perceived drawbacks, and the institutional conditions of responsible adoption. In this sense, the problem is not simply technological; it is methodological, ethical, and political. AI redistributes expertise, responsibility, and trust within the research process.

The study therefore supports a human-centred model of AI integration in which AI remains a powerful but governed instrument. Such a model should rest on at least five principles: explicit disclosure of AI use; verification of generated or processed content by the researcher; methodological traceability; institutionally framed ethical guidance; and sustained training in both technical use and research integrity. Without these conditions, the expansion of AI risks producing a form of accelerated but weakened science.

Future work should move beyond perception studies toward comparative analyses of actual practices, disciplinary cultures, policy effects, and institutional capacity. Yet even at its current stage, the article makes clear that preserving scientific quality in the AI era requires more than adoption. It requires governance, reflexivity, and the reaffirmation of the human researcher as the ultimate bearer of responsibility.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). ACM. https://doi.org/10.1145/3442188.3445922

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008

Jin, Y., Yan, L., Echeverria, V., Gašević, D., & Martinez-Maldonado, R. (2025). Generative AI in higher education: A global perspective of institutional adoption policies and guidelines. Computers and Education: Artificial Intelligence, 8, 100348. https://doi.org/10.1016/j.caeai.2024.100348

Khan, A. A., Badshah, S., Liang, P., Khan, B., Waseem, M., Niazi, M., & Akbar, M. A. (2022). Ethics of AI: A systematic literature review of principles and challenges. In Proceedings of the International Conference on Evaluation and Assessment in Software Engineering (pp. 383–392). ACM. https://doi.org/10.1145/3530019.3531329

McCarthy, J. (2007). What is artificial intelligence? Stanford University.

Resnik, D. B., & Hosseini, M. (2025). The ethics of using artificial intelligence in scientific research: New guidance needed for a new tool. AI and Ethics, 5, 1499–1521. https://doi.org/10.1007/s43681-024-00493-8

Rogers, E. M. (2003). Diffusion of innovations (5th ed.). Free Press.

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. UNESCO.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

Djamila Benzidoun Brahimi

Research Lab Media, Social Uses and Communication (MUSC), National Higher School of Journalism and Information Sciences (Algeria), benzidon.djamila@ensjsi.dz

Kherfia Djoudi

Research Lab Media, Social Uses and Communication (MUSC), National Higher School of Journalism and Information Sciences (Algeria), djoudi.kherfia@ensjsi.dz

© All rights reserved to the author of the article