

Reflections on Generative AI – Module 2

Dall-E 3 – Generative Image Test

For the purpose of this module, I decided to explore one of the image-generating AI tools – Dall-E 3. Initially, I wanted to explore Dall-E 2, but quickly discovered that it was no longer accessible following the release of the newer version 3. With no particular ideas in mind, I used the tool to generate a random image of a famous athlete.

I wanted to test the accuracy of the image generation with a prompt that I felt would lead to some inaccuracies. First, I gave a limited, open-ended description of the environment, which I assumed would result in some discrepancies. Second, I made sure to generate a person, as I had previously heard that AI image generators find certain aspects of human anatomy difficult to generate properly, especially hands.

“Generate an image of Tiger Woods swinging a driver on a famous golf course, stylized to look like an anime.” prompt. Dall-E 3 / ChatGPT, 12 Oct. 2024, chatgpt.com.

SAMR Model – Analyzing Generative AI for Learning

Using ChatGPT, I generated the following SAMR Model analysis of generative AI as a learning tool:

Substitution:

Generative AI simply replaces traditional tools without significant changes to the learning process.

  • Example: Instead of looking up definitions in a dictionary or creating visuals manually, learners use AI to generate text summaries or images.
  • Impact: Time-saving, but no major shift in learning outcomes. AI functions as a more efficient tool, similar to traditional resources.

Augmentation:

Generative AI improves traditional tasks by adding functional enhancements.

  • Example: AI tools are used to generate detailed feedback on student writing, helping refine language, grammar, and clarity more efficiently than traditional methods.
  • Impact: Learning is enhanced by faster feedback and improved personalization of tasks, but the core task (e.g., essay writing) remains the same.

Modification:

Generative AI allows for significant task redesign, transforming the learning process.

  • Example: Instead of merely solving problems or writing essays, students use AI to generate creative narratives, research papers, or project ideas. The AI can assist with brainstorming and drafting complex tasks, which can then be edited or expanded by the student.
  • Impact: AI modifies how learning happens, promoting creativity, exploration, and deeper engagement with materials. Students focus more on critical thinking, analysis, and refining generated content.

Redefinition:

Generative AI enables entirely new tasks that were previously unimaginable.

  • Example: AI is integrated into the learning environment where students can interact with virtual AI tutors, generate simulations, create immersive environments, or even develop new kinds of multimedia presentations. Students can work with AI to co-create personalized projects, simulations, or predictive models based on real-time data.
  • Impact: AI redefines the learning experience by fostering collaboration with intelligent systems, expanding learning beyond traditional boundaries. This creates opportunities for personalized, exploratory, and interdisciplinary learning, while engaging learners in tasks that transcend traditional limits of creativity and critical thinking.

“Generate a SAMR Model assessment of the usage of generative AI for learning.” prompt. ChatGPT, 12 Oct. 2024, chatgpt.com.


Reflection

This was an interesting module that expanded on my growing use of generative AI by teaching me how to evaluate my usage of these technologies, as well as how to cite the information and “quantify” it, in a sense.

What Generative AI applications have you found useful?
– My most used AI applications are GitHub Copilot and ChatGPT. I’ve used Copilot quite extensively for coding and debugging simple tasks, which it performs better than ChatGPT since it has direct access to certain development tools. ChatGPT, however, is very helpful for organizational tasks, summarizing text, and other day-to-day uses.

What tools did you find useful in your explorations this week and how did you use them?
– Dall-E 3 was very interesting to play around with. I wasn’t previously aware that it was integrated into ChatGPT as a means to shape / tweak the prompt; I had thought it was a standalone application. That said, it generated very interesting artwork, and I could see it having a variety of uses – generating random art for inspiration, wallpapers for electronic devices, etc.

How accurate or successful were the learning objects you created using the AI tools?
– Generally, the information generated by text-based generative AI such as ChatGPT is relatively accurate, especially when compared to the output of the various image-generating tools. My hypothesis is that ChatGPT and other text-based applications can be tweaked and fine-tuned with clever prompt construction, while image-generating AI seems to be lacking in certain areas (many of the instances in which I generated people resulted in strange abnormalities, especially with the hands).

What might you use AI tools for moving forward? What would you not use them for?
– I fully plan to continue using ChatGPT, GitHub Copilot, and now Dall-E 3 moving forward.

Screencasting and CTML Principles – Module 1

Flexbox in CSS – a godsend for web developers

For this module, I’ve created a screencast and a short presentation on one of my favorite modules in CSS / web development. In the presentation, I discuss and demonstrate a few properties of Flexbox as well as how they can be combined in dynamic ways.
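As a brief illustration of the kind of Flexbox behavior the presentation covers, here is a minimal sketch (the selectors and values are hypothetical examples for this post, not taken verbatim from the screencast):

    /* Container: lay out items along a horizontal main axis */
    .container {
      display: flex;
      flex-direction: row;            /* main axis runs left to right */
      flex-wrap: wrap;                /* items wrap onto new lines instead of overflowing */
      justify-content: space-between; /* distribute free space along the main axis */
      align-items: center;            /* center items along the cross axis */
      gap: 1rem;                      /* consistent spacing between items */
    }

    /* Items: grow to share free space, but never shrink below 200px */
    .container > .item {
      flex: 1 1 200px;                /* shorthand for flex-grow, flex-shrink, flex-basis */
    }

Combining flex-wrap with a flex-basis like this is one example of the dynamic behavior Flexbox enables: the same markup adapts from a multi-column layout to a single column as the viewport narrows.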

Reflection

  • Of all the CTML principles I learned this week, the personalization principle seemed the most intuitive. I’ve generally found it easier to learn when ideas are presented more casually and conversationally, which tends to come hand-in-hand with a speaker who is skilled at simplifying / breaking down more complex topics. On the other hand, I was surprised to learn about the image principle – when reviewing lecture videos in the past, I always found it helped me focus when the professor was visible on screen as well as narrating.
  • I found that implementing dual coding theory was very natural, as it helps to use images/diagrams to describe a topic rather than a chunk of words. It was difficult, however, to avoid redundancy, especially given that the information and properties I discussed are typically things you’d want to see visually for reference.
  • My target audience was a first-year university class on CSS and/or CSC, so I aimed to provide some context while also assuming some givens, such as that students would know where and how to set up configurations.
  • In the past, I’ve generally followed dual coding theory where I can. I’ve always made sure to include multiple slides with no words, just images/diagrams, as I believe that being able to describe and share information from a diagram is a good way to demonstrate understanding. I will have to work on minimizing redundancy, as I’ve generally adopted the mindset of having everything written down even as I’m saying it (which helps me keep track of the content if I forget something).

Introduction

My name is Kevin! I’m a 4th-year computer science student at the University of Victoria. I decided to take EDCI 337 as I felt it could offer an interesting new perspective on design practices in different forms of media. Through the lens of a computer science student, much of my design experience has been through web design and development – I believe this course could provide some insight into different ways to approach it, and might leave me with some interesting takeaways that I can apply in a real-world environment.