A survey by Juris Education, a national law school admissions consulting firm I co-founded after graduating from Dartmouth, brought to light a paradox: nearly 40% of pre-law students said they weren't comfortable sharing sensitive mental health information with artificial intelligence chatbots, yet 13% were doing so anyway. Does this point to the ubiquity of AI as an easily available crutch for students' emotional struggles? Or is it an outcome of the lack of professional human help available to students on campus?
While the survey data reflects this generation's love-hate relationship with AI, it is also a wake-up call for institutions like Dartmouth to get serious about AI literacy, policy and support on campus. There are many steps Dartmouth can take today to become a leader in this field.
First, integrate AI into admissions policy. Even with AI's rampant use among law school applicants, institutional policies haven't kept pace. Only 28% of colleges have formal AI guidelines, and another 32% are still developing them. Dartmouth can step in here by permitting AI use in applications while evaluating applicants on originality, critical thinking and ethical judgment. It is in both students' and institutions' best interests to integrate AI literacy into the process rather than treat the technology only as a disruptive force. Doing so would also help prepare students for an AI-first professional world, where they will need to make informed decisions about the responsible use of the technology every day.
Second, understand the mental health benefits of cognitive task delegation. Many of our senior admissions consultants, who are also noted attorneys, report that AI is already being used for rudimentary legal research, administrative tasks and even contract drafting. The same is true in other professions. It is time for universities like Dartmouth to accept the practicality of AI use rather than debate whether it is all good or all bad, especially given its benefits for future professionals.
Offloading repetitive, manual tasks to artificial intelligence can give users clarity about guilt-free AI use, free up time for more specialized work and, as a result, reduce stress and burnout. Dartmouth should model this at the higher education level by enacting clear guidelines and policies on the responsible use of AI in the classroom, on assignments, during exams and beyond. Doing so would make students feel more supported and better prepared for the professional world.
Third, provide better emotional support systems. Students turning to AI for mental health support exposes flaws in current university mental health systems, which are marked by scarce resources, limited access and high costs. Universities like Dartmouth should improve the quality of their mental health resources, ensuring that therapy is accessible and that the stigma around it is addressed. At the same time, counseling centers can partner with or build their own AI tools, as Dartmouth has with Evergreen, to offer guided, supervised services for low-risk tasks paired with human oversight. With such tailored solutions, students can get the help they need while easing their wariness about sharing sensitive information with AI bots built for a mass audience.
Finally, create awareness around overreliance on AI. Academic institutions like Dartmouth are responsible for creating playbooks that govern the use of AI in the classroom so that students emerge as well-prepared, ethical professionals.
We must remember that AI's role, whether in mental health support or cognitive task delegation, is still evolving. The more prominent its use becomes, the more we face nuanced questions of privacy, confidentiality and the protection of proprietary information. It is essential that students and professionals alike are made cognizant of these pitfalls so they avoid uninformed, unsupervised use of AI.
Writing about the Juris Education survey is not meant to alarm students and universities. Instead, it presents a chance for educational institutions to prepare the next generation of professionals for a world where AI is unavoidable. If we ignore this reality or act too late, we risk producing a generation of professionals who rely on AI reluctantly or incorrectly, potentially compromising both their well-being and their professional competence.
Arush Chandna is a member of the Dartmouth Thayer School of Engineering Class of 2016 and a co-founder of Juris Education, a leading law school admissions consulting firm.