Strategies for using Generative AI practically, ethically, and safely at Boise State University

This article highlights campus guidelines for responsible AI use and provides actionable strategies for integrating AI into your workflows securely. It includes clear, practical guidance to ensure your AI-powered work aligns with best practices for data protection and ethical use.

Campus Guidelines

Boise State Guidelines

Generative Artificial Intelligence (AI) – Guidance on Use and Applicable Policies

Boise State University has established rules and policies regarding the use of Generative AI in our unique workplace. These resources help keep employees and the university safe while still allowing us to take advantage of new tools.


🧩 1) Information Privacy and Data Security (Policy 8060) - It is up to us to protect sensitive data when using AI.

⚙️ 2) Intellectual Property and Copyright - Understand the implications of using AI-generated content.

Simple Summary: You may not use AI tools or services to infringe copyright or other intellectual property rights.

Detailed Summary: System IT resources may not be used to violate copyright or other intellectual property laws. Entering copyrighted material into a generative AI tool or service may effectively result in the creation of a digital copy, which is a copyright violation. Feeding copyrighted material into a generative AI tool or service could “train” the AI to output works that violate the intellectual property rights of the original creator. In addition, entering research results into a generative AI tool or service could constitute premature disclosure, compromising invention patentability.

Applicable Boise State Policies: University Policy 8000 (Information Technology Resource Use)

🔁 3) Use AI Ethically

Simple Summary: You may not upload any data that could be used to help create or carry out malware, spam and phishing campaigns, or other cyber scams.

Detailed Summary: System IT resources may not be used to disseminate unauthorized email messages.

Applicable Boise State Policies: University Policy 8000 (Information Technology Resource Use)

Simple Summary: You may not direct AI tools or services to generate or enable content that facilitates sexual harassment, stalking, or sexual exploitation, or that enables harassment, threats, defamation, hostile environments, stalking, or illegal discrimination.

Detailed Summary: University Policy 1060 prohibits discrimination or harassment on the basis of protected class. University Policy 1065 prohibits sexual harassment, stalking, dating violence, and domestic violence. University Policy 1075 prohibits discrimination on the basis of disability.

Applicable Boise State Policies: University Policy 1060 (Non-discrimination and Anti-harassment); University Policy 1065 (Sexual Harassment, Sexual Misconduct, Dating Violence, Domestic Violence, and Stalking); University Policy 1075 (Non-discrimination on the Basis of Disability)

Simple Summary: You may not use AI tools or services to generate content that helps others break federal, state, or local laws; institutional policies, rules, or guidelines; or licensing agreements or contracts.

Detailed Summary: System IT resources may not be used to violate laws, policies, or contracts.

Applicable Boise State Policies: University Policy 8000 (Information Technology Resource Use)

Simple Summary: You may not use AI tools to engage in illegal activity in violation of federal, state, or local law, including but not limited to Idaho Code § 67-6628A (Electioneering Communications – Use of Synthetic Media), Idaho Code § 18-6606 (Disclosing Explicit Synthetic Media), and Idaho Code § 18-1507C (Visual Representations of the Sexual Abuse of Children).

Detailed Summary: System IT resources may not be used to violate laws such as distributing certain deepfakes (realistic AI-generated videos or audio) without consent if doing so causes harm or is used for fraud; deceptively representing, through synthetic media, a political candidate’s actions or speech in an electioneering communication; or using generative AI to produce, distribute, receive, or possess visual depictions of a child engaging in explicit sexual conduct.

Applicable Boise State Policies: University Policy 1065 (Sexual Harassment, Sexual Misconduct, Dating Violence, Domestic Violence, and Stalking); University Policy 7030 (Reporting Waste and Violations of Law, Regulation, or University Policy); University Policy 7070 (Employee Political Activity); University Policy 8000 (Information Technology Resource Use)

Practical Strategies

Workflow Integration

Integrating AI works best with an intentional strategy. Here are some best practices for integrating AI into your workflows at work:

🧩 1. Start with Clear Use Cases

  • Identify repetitive, time-consuming, or data-heavy tasks AI can help with.

  • Common examples:

    • Drafting or summarizing emails or documents

    • Brainstorming or outlining content

    • Analyzing data or trends

    • Generating meeting summaries or notes

    • Customer support or internal help desk chatbots

⚙️ 2. Embed AI into Existing Tools

  • Use AI where your team already works:

    • Microsoft 365 Copilot or Google Workspace AI for docs/emails

    • ChatGPT or other LLMs via Slack, Teams, browser extensions, or your own scripts (a minimal API sketch follows this list)

    • CRM or ticketing systems with AI plugins
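
If your team wants to wire AI into its own scripts rather than a vendor plugin, the minimal sketch below shows one way to call a hosted LLM from Python. It assumes the openai package and an OPENAI_API_KEY environment variable; the model name is a placeholder for whatever university-approved service you actually use.

```python
# Minimal sketch: summarizing meeting notes with a hosted LLM.
# Assumes: `pip install openai`, an OPENAI_API_KEY environment variable,
# and a university-approved service. The model name is an assumption;
# substitute the model your approved tool provides.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_notes(notes: str) -> str:
    """Ask the model for a short, action-item-focused summary."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: replace with your approved model
        messages=[
            {"role": "system",
             "content": "Summarize these meeting notes in 3 bullet points, "
                        "then list action items with owners."},
            {"role": "user", "content": notes},
        ],
    )
    return response.choices[0].message.content

# Usage: print(summarize_notes("Rough notes pasted here..."))
```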

🔁 3. Create Repeatable Workflows

  • Develop standard prompts/templates for recurring tasks.

    • Example: Weekly status report summary

    • A shared prompt library for team use is a huge time saver (a minimal template sketch follows this list)

  • Automate where possible.
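
A prompt “template” can be as simple as a Python string with named placeholders. The sketch below is one possible shape for a small shared prompt library; the template names and fields are illustrative assumptions, not a standard.

```python
# Minimal sketch of a shared prompt library using string templates.
# The template names and placeholder fields are illustrative assumptions.
from string import Template

PROMPTS = {
    "weekly_status": Template(
        "Summarize the following updates into a weekly status report "
        "for $audience. Use 3-5 bullet points and flag any blockers.\n\n"
        "$updates"
    ),
    "email_draft": Template(
        "Draft a $tone email to $recipient about: $topic. "
        "Keep it under 150 words."
    ),
}

def build_prompt(name: str, **fields: str) -> str:
    """Fill a named template; raises KeyError if a field is missing."""
    return PROMPTS[name].substitute(**fields)

# Usage:
# prompt = build_prompt("weekly_status", audience="my manager",
#                       updates="- Finished onboarding docs\n- Demo slipped")
```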

🧠 4. Keep a Human in the Loop

  • Use AI for drafting, not final decisions.

  • Always review and edit outputs—especially when accuracy, tone, or ethics matter.

  • AI should augment, not replace, your judgment.

🔍 5. Maintain Transparency and Documentation

  • Log how and where AI is used in workflows (one lightweight approach is sketched below).

  • Share AI-generated content sources when relevant (especially for public or academic work).

  • If collaborating, note if something was AI-assisted so expectations are clear.
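
Logging does not need special tooling. The sketch below appends each AI-assisted task to a shared CSV file; the file name and columns are assumptions to adapt to your team.

```python
# Minimal sketch: a lightweight CSV log of AI use in a workflow.
# The file name and columns are assumptions; adapt to your team's needs.
import csv
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_usage_log.csv")  # assumption: a shared team location

def log_ai_use(tool: str, task: str, reviewer: str) -> None:
    """Append one row recording the tool used, the task, and who reviewed it."""
    new_file = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["timestamp_utc", "tool", "task", "human_reviewer"])
        writer.writerow([datetime.now(timezone.utc).isoformat(),
                         tool, task, reviewer])

# Usage: log_ai_use("ChatGPT", "Drafted newsletter intro", "J. Smith")
```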

🔐 6. Respect Privacy & Security

  • Never enter sensitive, confidential, or personally identifiable university data into AI tools (see Policy 8060).

  • Use only university-approved AI tools, and follow the privacy safeguards later in this article.

📈 7. Evaluate & Adjust Regularly

  • Track time saved, quality improved, or feedback received.

  • Tweak your workflows and AI prompts based on results.

  • Stay up to date on new features—AI evolves fast.

🧑‍🏫 8. Train Your Team

  • Offer quick-start guides, prompt libraries, or lunch-and-learns.

  • Normalize experimenting—but also coach on limitations and ethics.

  • Encourage peer sharing of what’s working.


Data Input & Output Management

DATA INPUT: Putting Information into AI Tools

  • Be Clear and Specific

    • Write prompts like you’re giving instructions to a new assistant.

    • Example: Instead of “summarize this,” say “Summarize this report in 3 bullet points for my manager.”

  • Use Bullet Points or Lists

    • If you have a lot of information, break it into a simple list. AI handles organized content better.

  • Feed It Clean Data

    • Remove unnecessary formatting, duplicates, or unrelated notes before pasting in.

    • Think: "Would a human find this confusing?"

  • Break It Into Chunks

    • Large documents? Feed them in sections. Very long inputs can exceed what the AI can process at once (see the chunking sketch after this list).

  • Label Things Clearly

    • Use headings like “Customer Feedback,” “Sales Data,” or “Meeting Notes” to help the AI know what it’s reading.
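
If you want to split a long document programmatically before pasting it in, here is a minimal chunking sketch. The 4,000-character budget is an illustrative assumption; actual limits vary by tool.

```python
# Minimal sketch: splitting a long document into chunks for an AI tool.
# The 4,000-character budget is an illustrative assumption, not a known
# limit for any particular tool; check your tool's documentation.
def chunk_document(text: str, max_chars: int = 4000) -> list[str]:
    """Split text on blank lines, packing paragraphs into chunks under max_chars."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current.strip())
            current = ""
        current += para + "\n\n"
    if current.strip():
        chunks.append(current.strip())
    return chunks

# Usage: paste each piece in order, e.g.
# for i, chunk in enumerate(chunk_document(long_report), start=1):
#     print(f"--- Part {i} ---\n{chunk}\n")
```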

DATA OUTPUT: Getting Useful Results from AI

  • Ask for the Format You Want

    • Example: “Give me this as a table,” or “Summarize this into an email.”

  • Review and Edit the Output

    • AI helps you start, but always give it a human touch before sharing.

  • Save Reusable Prompts

    • Got a prompt that works well? Save it in a doc or sticky note for next time.

  • Ask for Multiple Versions

    • Not sure what tone you want? Ask AI to give you a casual version and a formal one.

  • Verify Numbers and Facts

    • AI can "hallucinate" facts. Always double-check data or quotes if accuracy matters (one way to sanity-check structured output is sketched after this list).
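
When you ask for structured output (for example, “give me this as JSON”), you can at least check the structure mechanically before trusting it. A minimal sketch, assuming the AI’s reply is already in a string; the field names are whatever you asked for.

```python
# Minimal sketch: validating that an AI reply is well-formed JSON with
# the expected fields before using it. Field names here are illustrative.
import json

def parse_summary(reply: str) -> dict:
    """Parse the AI's reply as JSON and check for required keys."""
    try:
        data = json.loads(reply)
    except json.JSONDecodeError as err:
        raise ValueError(f"Reply was not valid JSON: {err}") from err
    for key in ("summary", "action_items"):  # assumption: fields you asked for
        if key not in data:
            raise ValueError(f"Missing expected field: {key!r}")
    return data

# Usage:
# data = parse_summary(ai_reply)
# print(data["summary"])
```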

BONUS TIPS: Make Life Easier

  • Use Templates

    • Create a standard format for reports, summaries, or updates—and ask AI to “fill in the blanks.”

  • Add Context

    • Tell the AI who the audience is (e.g., “This is for a beginner” or “This is for my boss”) so it tailors the output.

  • Use it to Clean Up Messy Data

    • Copy-paste a jumbled list or table and ask AI to organize, reformat, or alphabetize it (for simple cases you can also do this locally; see the sketch after this list).

  • Turn Raw Data Into Charts

    • You can ask AI to summarize trends or even write a short analysis for your spreadsheet or presentation.
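
For simple cleanups like deduplicating and alphabetizing, you do not always need AI at all; a few lines of Python do it deterministically, and no data leaves your machine. A minimal sketch:

```python
# Minimal sketch: deduplicating and alphabetizing a jumbled list locally,
# with no AI involved, so no data leaves your machine.
def tidy_list(raw: str) -> str:
    """Split pasted text into lines, drop blanks and duplicates, sort A-Z."""
    items = {line.strip() for line in raw.splitlines() if line.strip()}
    return "\n".join(sorted(items, key=str.casefold))

# Usage:
# print(tidy_list("banana\nApple\n\nbanana\ncherry"))
# -> "Apple", "banana", "cherry" (one per line)
```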


Use Boise State University-Selected AI Tools


Practical Examples of How to Use AI Tools Responsibly

Here are 5 real-world examples of people using AI at work in practical, ethical, and safe ways:

✅ 1. Summarizing Meeting Notes for the Team

How it works: After a Zoom call, someone pastes their rough notes into ChatGPT and asks it to create a clear summary with action items.
Why it’s ethical/safe: No personal or sensitive info is shared, and the final summary is reviewed by a human before being sent out.

✅ 2. Drafting Internal Communications

How it works: A communications specialist uses AI to draft emails or newsletters, then edits for tone and accuracy.
Why it’s ethical/safe: They never copy/paste confidential content directly into AI tools, and final messages are human-approved.

✅ 3. Helping Write Job Descriptions

How it works: An HR staffer gives AI the key duties and asks for a professional-sounding job description to refine.
Why it’s ethical/safe: They supply neutral, factual inputs and double-check the language for bias or clarity before posting.

✅ 4. Creating a Training Guide from a Policy Document

How it works: A team member feeds in a long policy and asks AI to create a beginner-friendly guide or FAQ.
Why it’s ethical/safe: They use publicly shareable material and ensure the final output is accurate and accessible.

✅ 5. Organizing Survey Results

How it works: A staffer pastes in open-ended responses from a staff survey and asks AI to summarize the key themes.
Why it’s ethical/safe: They remove names or identifiable info first (a simple redaction sketch follows these examples), and use the AI to assist—not replace—real analysis.
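
A first pass at removing identifiers can be automated before anything is pasted into an AI tool. The sketch below uses regular expressions to catch common patterns like emails and phone numbers; it is not complete de-identification, and names still need a manual pass.

```python
# Minimal sketch: redacting obvious identifiers (emails, phone numbers)
# from survey text before sharing it with an AI tool. This is NOT complete
# de-identification; names and indirect identifiers need human review.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tags like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Usage:
# print(redact("Contact jane.doe@boisestate.edu or 208-555-0123."))
# -> "Contact [EMAIL] or [PHONE]."
```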

Risk Mitigation and Privacy Safeguards

Identifying Potential Risks and Mitigation

10 Risks of Using AI in the Workplace, with Mitigation Strategies

  1. Data Privacy Violations

➤ Sharing sensitive or confidential information with AI tools can risk exposure.

➤ Only input non-sensitive, anonymized data into AI tools. Use approved, secure platforms and follow organizational data policies.

  2. Inaccurate or Misleading Output

➤ AI sometimes "hallucinates" facts or makes incorrect assumptions.

➤ Always review AI-generated content. Treat AI as a first draft or assistant, not a source of truth.

  3. Bias in AI Responses

➤ AI can reflect or amplify biases present in its training data.

➤ Review outputs critically, especially around sensitive topics. Encourage diverse review and use bias detection tools where available.

  4. Over-Reliance on AI

➤ Relying too much on AI may reduce human critical thinking or judgment.

➤ Use AI to support—not replace—human decision-making. Build processes that include human review and collaboration.

  5. Job Displacement or Role Confusion

➤ Misunderstanding how AI affects job roles may create anxiety or misuse.

➤ Communicate clearly about AI’s role in supporting (not replacing) people. Provide training on using AI tools to enhance roles.

  6. Security Vulnerabilities

➤ Using unsecured or unofficial AI tools could expose systems to breaches.

➤ Use only organization-approved AI platforms. Avoid browser plugins or apps without proper vetting.

  7. Lack of Transparency or Explainability

➤ AI decisions can be difficult to interpret, especially with complex models.

➤ Choose tools that allow you to understand how outputs are generated. Document how AI is used in important decisions.

  8. Ethical Use Concerns

➤ Misuse of AI (e.g., for surveillance or decision-making without consent) can create trust issues.

➤ Follow ethical guidelines and get informed consent where needed. Avoid AI in decision-making that affects people’s rights or wellbeing.

  9. Inconsistent Quality of Output

➤ AI results can vary widely in quality depending on prompts or context.

➤ Develop prompt templates, test for quality, and keep a “human-in-the-loop” for all high-stakes tasks.

  10. Compliance & Legal Risks

➤ Improper use of AI could violate workplace policies, accessibility laws, or industry regulations.

➤ Stay informed about legal requirements and AI-related policies in your industry. Consult with legal or compliance teams if unsure.

  • Pro-Tip: Always verify AI-generated content before relying on it.

Privacy Safeguards

  1. 🚫 Don’t Share Sensitive or Personal Information

    • Never paste personal details (like names, emails, or ID numbers) into AI tools—especially public ones.

    • If you must use real data for testing, anonymize it first.

  2. ✅ Use Approved & Secure AI Tools

    • Stick to university-approved AI platforms; avoid unvetted browser plugins or apps.

  3. 🧹 Remove Metadata and Extra Info from Files

    • Before uploading a document or image, clear any hidden data like author names, comments, or version history (a minimal sketch appears at the end of this article).

  4. 🔐 Log Out and Lock Your Device

    • If you’re working with AI on shared or public devices, always log out after use.

    • Don’t let AI tools stay open unattended.

  5. 📚 Know Your Organization’s AI Policy

    • Review the Boise State guidance and policies summarized earlier in this article before adopting a new AI tool.
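
As one example of clearing hidden data from a file before upload, the sketch below blanks common core properties of a .docx document. It assumes the third-party python-docx package; review comments and tracked changes still need to be removed in Word itself.

```python
# Minimal sketch: blanking common core properties of a .docx file before
# uploading it to an AI tool. Assumes `pip install python-docx`. Review
# comments and tracked changes are NOT removed here; clear those in Word.
from docx import Document

def strip_metadata(path: str, out_path: str) -> None:
    """Blank author-related core properties and save a clean copy."""
    doc = Document(path)
    props = doc.core_properties
    props.author = ""
    props.last_modified_by = ""
    props.comments = ""  # the "comments" core property, not review comments
    doc.save(out_path)

# Usage: strip_metadata("draft.docx", "draft_clean.docx")
```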