[{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/ai/","section":"Tags","summary":"","title":"AI","type":"tags"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/authors/","section":"Authors","summary":"","title":"Authors","type":"authors"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/automation/","section":"Tags","summary":"","title":"Automation","type":"tags"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/blog/","section":"Blog","summary":"Project notes, reflections, and experiments from my portfolio journey.","title":"Blog","type":"blog"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/categories/blog/","section":"Categories","summary":"","title":"Blog","type":"categories"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/categories/","section":"Categories","summary":"","title":"Categories","type":"categories"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/","section":"Emil Dagsberg","summary":"","title":"Emil Dagsberg","type":"page"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/authors/emil-dagsberg/","section":"Authors","summary":"","title":"Emil-Dagsberg","type":"authors"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/project-planning/","section":"Tags","summary":"","title":"Project Planning","type":"tags"},{"content":" Introduction # In this project, our group will work with a real-world case based on E.G. and their yearly Christmas market. The goal is to investigate how a digital solution can support the planning, administration, and communication around a large event with many standholders, visitors, activities, and practical details.\nThe project is not only about building an application. 
It is also about understanding the current workflow, identifying manual processes, and deciding where technology can create actual value. Before choosing technologies or designing features, we need to understand the problem we are trying to solve.\nFrom the information we have gathered so far, the Christmas market involves a lot of coordination. Standholders need to apply, practical information has to be collected, emails are sent back and forth, stand placements must be planned, and visitors need to find their way around the market.\nThis creates an interesting case because it contains both internal administrative problems and external visitor-facing problems.\nThe Overall Goal # The overall goal is to build a digital Christmas market platform that can help E.G. handle parts of the event in a more structured way.\nThe solution we are considering has three main parts:\nA digital standholder application flow Email automation for repeated communication A simple live map for visitors The idea is to reduce manual work while also improving the experience for both the people organizing the event and the people attending it.\nToday, a lot of the work seems to depend on emails, manual checking, copy/paste, and personal knowledge. That can work, but it also makes the process vulnerable. If one person holds most of the information, the process becomes harder to scale and harder to hand over to others.\nWe want to explore how a system can collect information in one place, make it easier to search and filter, and help automate some of the repeated communication.\nThe Problem We Want to Solve # The concrete problem we want to solve is that the current process around standholders appears to be very manual.\nA standholder may need to contact E.G. by email, receive information, fill out details, send them back, and then wait for further communication. 
On the internal side, this means that someone has to keep track of applications, missing information, categories, approvals, practical needs, and placement.\nThis creates several challenges:\nImportant information may be spread across emails Details may need to be copied manually into lists or documents It can be difficult to get a quick overview of all standholders Repeated emails take time to write and send It can be hard to see which applications are missing information Knowledge from previous years may not be easy to reuse The process becomes vulnerable if too much depends on one person At the same time, visitors also need a good overview of the Christmas market. If the event contains many stands across different buildings and outdoor areas, a static list may not be enough. A live map could make it easier for visitors to find specific stands, food, activities, parking, toilets, or other important locations.\nThe Project Idea # Our working title is:\nThor og Emils Julemarkeds løsning\nThe project idea is to build a digital platform that connects the standholder process with a visitor-facing map.\nIn the first version, a standholder should be able to apply through a digital form. The application should then appear in an admin dashboard, where the event organizer can review it, change its status, and see an overview of all applications.\nWhen a standholder is approved, the information could later be used on a public live map. This means the same data can serve two purposes:\nInternally, it helps with planning and administration. Externally, it helps visitors navigate the market. 
This is important because it prevents the same information from being written multiple times in different places.\nStandholder Application # The standholder application flow would replace or reduce the need for back-and-forth email communication at the beginning of the process.\nInstead of asking for a form through email, a standholder could fill out a form directly.\nThe form could include information such as:\nName of standholder or company Contact person Email and phone number Product description Category Previous participation Need for electricity Preferred indoor or outdoor placement Links to website or social media Images of products or stand setup Acceptance of practical rules Relevant documentation if needed This would give the organizer more structured data from the beginning. It also makes it easier to validate that required information is included before the application is submitted.\nAdmin Dashboard # The admin dashboard would be the internal tool for managing applications.\nThe organizer should be able to:\nSee all submitted applications Search and filter applications Open details for a single standholder Change status, for example new, missing information, approved, or rejected Add internal notes See categories and practical needs Decide which approved standholders should appear publicly The dashboard is important because it turns many separate emails into one structured overview.\nA useful part of the dashboard could be a status-based workflow. For example, if an application is missing information, the system could mark it clearly and generate a suggested email response. 
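A status-based draft step like the one just described could be sketched as follows. This is only an illustrative assumption: the status names and email wording are invented here, not taken from E.G.'s actual process.

```typescript
// Hypothetical sketch of the status-based workflow described above.
// Status names and email wording are illustrative, not from the project.
type Status = "new" | "missing_information" | "approved" | "rejected";

function suggestEmailDraft(status: Status, missingFields: string[] = []): string {
  switch (status) {
    case "new":
      return "Thank you for your application. We will review it shortly.";
    case "missing_information":
      return `Your application is missing: ${missingFields.join(", ")}. Please send us the missing details.`;
    case "approved":
      return "Your application has been approved. Practical information about setup will follow.";
    case "rejected":
      return "Unfortunately we cannot offer you a stand this year. Thank you for applying.";
  }
}

// A status change to "missing information" produces a suggested draft for the organizer to review
console.log(suggestEmailDraft("missing_information", ["electricity needs", "category"]));
```

The point of the sketch is that the organizer still reviews and sends the draft; the system only removes the repeated writing.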
If the application is approved, the system could prepare an approval message and make the standholder available for the live map.\nEmail Automation # Email automation is one of the most relevant parts of the project because repeated communication seems to be a large part of the current process.\nThe goal is not necessarily to let the system send every email automatically without human control. A more realistic first version would be to generate email drafts that the organizer can review, edit, and send.\nThis gives the benefits of automation while still keeping the human in control.\nExamples of email automation could be:\nConfirmation when an application is received Request for missing information Approval email Rejection email Reminder before the event Practical information before setup Message when the public standholder list or map is available For the MVP, we could keep this simple. When the status of an application changes, the system generates a relevant email draft.\nFor example, if an application is marked as “missing information,” the system could suggest a message asking for the specific missing fields. If the application is approved, the system could generate a short approval email with practical next steps.\nThis would reduce repeated writing and help make the communication more consistent.\nLive Map # The live map is the visitor-facing part of the solution.\nThe idea is that approved standholders can be shown on a simple interactive map. Visitors should be able to find stands, categories, activities, and practical locations more easily.\nThe first version does not need to be a complex GPS-based map. 
It could be a simple illustrated map or layout with clickable areas.\nThe map could include:\nIndoor areas Outdoor areas Standholders Food areas Activities Toilets Parking Information points Visitors could search for a product or category, for example “jewelry,” “ceramics,” or “food,” and then see where relevant stands are located.\nThis part of the project is interesting because it uses data from the admin system. When a standholder is approved and assigned a location, that information can also be used publicly. That creates a connection between planning and visitor experience.\nUse of AI # AI should support the workflow, not replace the organizer.\nThe most realistic use of AI in this project is as an assistant that helps process text and generate suggestions.\nPossible AI features include:\nSummarizing standholder applications Suggesting a category based on the product description Detecting missing information Generating email drafts Helping answer simple questions about standholders or categories For example, if a standholder writes a long product description, AI could create a short internal summary. This would make it faster for the organizer to review many applications.\nAI could also suggest a category. If an application describes handmade candles, decorations, and Christmas ornaments, the system might suggest “Christmas decoration” or “crafts.” The organizer should still be able to override the suggestion.\nFor email automation, AI could generate a draft based on the application status. This would be useful because many emails probably follow the same structure but still need small adjustments.\nA possible AI-generated output could include:\nShort summary Suggested category Missing information Suggested email draft This keeps the AI feature focused and useful.\nTechnology Considerations # Since this is a school project, the technology choices need to match both the scope and the time available.\nA possible frontend could be built with React. 
React would make sense because the project needs multiple views, form handling, state changes, filtering, and a dynamic map interface.\nThe backend could be built with Java and Javalin, since that fits well with the technologies we have already used in our education. The backend would handle applications, statuses, users, categories, and possibly email templates.\nFor the database, PostgreSQL would be a good choice because the project contains structured data. Standholders, applications, categories, statuses, notes, and map locations can all be represented clearly in relational tables.\nA possible tech stack could be:\nReact for the frontend Java/Javalin for the backend PostgreSQL for the database REST API between frontend and backend OpenAI API or another AI API for summaries and email drafts Simple SVG or image-based map for the live map For the MVP, I would avoid making the map too technically advanced. A static SVG or image with clickable points is probably enough to demonstrate the idea. 
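A minimal sketch of how such clickable map points could be represented and searched, assuming invented stand names, categories, and coordinates:

```typescript
// Hypothetical sketch: approved stands as clickable points on a static SVG map.
// All names, categories, and coordinates are invented for illustration.
interface MapPoint {
  id: number;
  name: string;
  category: string;
  x: number; // position in the SVG viewBox
  y: number;
}

const points: MapPoint[] = [
  { id: 1, name: "Nordic Candles", category: "crafts", x: 120, y: 80 },
  { id: 2, name: "Market Food Hut", category: "food", x: 300, y: 150 },
  { id: 3, name: "Ceramics Corner", category: "ceramics", x: 210, y: 60 },
];

// A visitor searches by category; the matching points can then be highlighted on the map image.
function findByCategory(category: string): MapPoint[] {
  return points.filter((p) => p.category === category);
}

console.log(findByCategory("food").map((p) => p.name));
```

Because the points are plain data, the same records could come straight from the admin system once a stand is approved and assigned a location.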
The goal is not to build Google Maps, but to show how standholder data can become useful for visitors.\nScope # One of the most important parts of this project is controlling the scope.\nThe full idea could become very large if we try to include everything:\nFull standholder portal Login system Admin dashboard Email automation AI assistant Live map Visitor chatbot File upload Payment GDPR tools Historical data from previous years Drag-and-drop stand placement Real-time updates That is too much for a first version.\nInstead, the MVP should focus on the smallest flow that demonstrates the value of the system.\nMVP # The MVP could be:\nA standholder fills out a digital application form The application is saved in the system The organizer can view it in an admin dashboard AI generates a short summary and suggested category The organizer can change the status of the application The system generates an email draft based on the status Approved standholders can be shown on a simple live map This MVP demonstrates the full journey:\nApplication → administration → AI support → email draft → approval → public map\nIt is small enough to build, but still complete enough to show the idea.\nWhat We Will Not Build # To keep the project realistic, there are several things we should not build in the first version.\nWe will not build:\nA full payment system Automatic sending of all emails without approval Advanced GDPR administration GPS-based navigation A full visitor app A complete chatbot for all visitor questions Drag-and-drop stand planning Real integration with E.G.’s live website Multi-year historical analysis Full document management These could be future improvements, but they should not be part of the first version.\nThe main goal is to prove that the workflow can be improved with a more structured digital system.\nQuestions We Still Need Answered # Before building too much, we need to clarify some things.\nImportant questions include:\nHow many standholders usually 
participate? How many applications are received? What information must a standholder provide? Are there existing forms or templates? What emails are sent most often? Should emails be sent automatically or only generated as drafts? What categories are used for standholders? Does E.G. already have a map or floor plan? Should stands be placed on exact locations or only areas? What information should visitors see on the live map? Should the public map include activities and practical locations? Which part of the process takes the most time today? The answers to these questions will help us decide what should be included in the MVP and what should be left out.\nMy Current Thoughts # I think the strongest version of this project is not just a form or just a map. The interesting part is connecting the internal workflow with the public visitor experience.\nIf standholder information is collected digitally from the beginning, it can be reused throughout the process. The same data can help with administration, emails, planning, and the public map.\nThat makes the project more valuable than a single isolated feature.\nAt the same time, we need to be careful not to build too much. The MVP should focus on a clear and demonstrable flow. If we can show how one standholder moves from application to approval and then appears on a live map, we have already demonstrated the core idea.\nAI can then be added as a practical assistant in the places where it makes sense: summaries, categories, missing information, and email drafts.\nConclusion # This project is about exploring how a digital platform can support a real event workflow. For E.G., the Christmas market involves many practical tasks, many people, and a lot of communication. That makes it a good case for looking at automation, structured data, and AI-assisted administration.\nOur goal is to build a solution that reduces manual work, creates a better overview, and makes information easier to reuse. 
The first version should not try to solve everything. Instead, it should demonstrate a focused workflow from standholder application to admin handling, email support, and a simple visitor-facing live map.\nIf we manage to keep the scope under control, this could become a strong project because it solves a concrete problem while still giving us room to work with relevant technologies such as React, backend APIs, databases, automation, and AI.\n","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/blog/project-start/","section":"Blog","summary":"A reflection on our upcoming project for E.G., where we explore how digital tools and AI can support standholder administration, email automation, and visitor navigation.","title":"Project Start: A Digital Christmas Market Platform","type":"blog"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/","section":"Tags","summary":"","title":"Tags","type":"tags"},{"content":"","date":"12 May 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/web-development/","section":"Tags","summary":"","title":"Web Development","type":"tags"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/coding-agent/","section":"Tags","summary":"","title":"Coding Agent","type":"tags"},{"content":"","date":"11 May 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/spec-driven-development/","section":"Tags","summary":"","title":"Spec-Driven Development","type":"tags"},{"content":" Introduction # As AI coding agents become more useful in software development, the way we describe tasks to them also becomes more important. It is no longer enough to only say “build this feature” and hope that the result matches what we had in mind. 
The better the instructions are, the better the agent can work.\nThis is where spec-driven development becomes interesting.\nSpec-driven development is a workflow where the developer starts by writing a clear specification before writing or generating the code. A spec describes what the application should do, how it should behave, what the user should experience, and sometimes what technical constraints the solution should follow.\nWhen working with AI code agents, specs can become the bridge between the human idea and the generated implementation. Instead of relying on vague prompts, the developer can give the agent a structured description of the goal.\nWhat Is Spec-driven Development? # Spec-driven development means that the specification becomes the starting point for development. The spec can describe things like:\nThe purpose of the application The main user flows The features that must be included The data the application should use The expected behavior in different situations Error handling Technical requirements What should not be included In traditional development, a spec can help a team agree on what needs to be built before coding begins. With AI-assisted development, the spec becomes even more useful because it gives the coding agent a clearer target.\nA simple prompt might be:\nBuild a quiz app.\nA better spec-driven prompt might explain:\nBuild a React quiz app where users can answer one question at a time, receive feedback after each answer, save progress in local storage, review answers at the end, and reset the quiz. The app should not require login, and the questions should be stored in a separate data file.\nThe second version gives the code agent much more context. It reduces guesswork and makes it easier to evaluate whether the result is correct.\nWhy Specs Matter When Using Code Agents # AI code agents can generate code quickly, but they do not automatically know the full intention behind a project. 
If the instructions are unclear, the agent may make assumptions. Sometimes those assumptions are useful, but other times they can lead to features, structures, or design choices that do not fit the project.\nSpecs help because they make the task more concrete.\nA good spec can answer questions such as:\nWhat problem is the application solving? Who is using it? What should happen first? What should happen after the user completes a task? What data should be saved? What should happen if something goes wrong? What is the minimum version that should work? This makes it easier for the code agent to build something useful. It also makes it easier for the developer to review the result, because the finished code can be compared directly with the spec.\nHow I Could Use Specs With Code Agents # I could imagine using specs as the first step in almost every AI-assisted coding project.\nBefore asking a coding agent to create files or write code, I would first describe the project in a structured way. For example, if I wanted to build a small task management app, the spec could include:\nUsers can create, edit, and delete tasks Tasks have a title, description, status, and due date Tasks are saved in local storage The app has filters for all, active, and completed tasks The design should be simple and mobile-friendly The first version should not include authentication or a backend After writing the spec, I could ask the coding agent to create a plan before coding. This would help check whether the agent understood the task correctly.\nThen the agent could generate the first version of the app based on the spec. If something is missing, I would not need to explain everything again. I could point back to the spec and say which requirement has not been fulfilled.\nThis makes the workflow more controlled.\nUsing Logs Together With Specs # Specs describe what should happen. 
Logs can show what actually happened.\nThat is why I think specs and logs can work very well together when using code agents. A spec gives the agent the intended behavior, while logs give the agent real information from the application or development environment.\nLogs can include things like:\nBuild errors Runtime errors Console output Test results API responses User interaction problems Deployment errors When something breaks, the code agent can compare the error logs with the original spec. This gives it more context than just the error message alone.\nFor example, if a quiz app is supposed to save answers in local storage, but the browser console shows an error when saving, the agent can use both the spec and the log:\nThe spec says answers should be saved locally The log shows where the save function fails The agent can inspect the code and suggest a fix The developer can test whether the behavior now matches the spec This creates a feedback loop between intention, implementation, and real behavior.\nA Possible Workflow # A spec-driven workflow with a code agent could look like this:\nWrite a short project idea Turn the idea into a structured spec Ask the code agent to review the spec and create an implementation plan Let the agent build the first version Run the application Collect logs, errors, and test results Give the logs back to the agent Ask the agent to fix the implementation based on the spec Review the code manually Update the spec when the project changes This workflow would make the code agent more like a development partner. 
Instead of only generating code from a single prompt, the agent would work with a living description of the project.\nThe spec would guide the direction, and the logs would help correct the implementation.\nExample: Building an Application With Specs and Logs # Imagine I want to build a simple habit tracker.\nThe first spec could describe the core features:\nUsers can add habits Users can mark habits as completed each day The app shows a weekly overview Data is stored locally in the browser The app works without login The design should be clean and easy to use on mobile The coding agent could then build the app based on this spec.\nAfter testing it, I might discover from the console logs that the app saves completed habits incorrectly when the date changes. Instead of just saying “fix the bug,” I could give the agent the spec and the log:\nSpec requirement: A habit should be marked as completed only for the selected day. Observed problem: When I mark a habit as completed today, it also appears completed for other days. Console/log output: [example error or state output] ","date":"11 May 2026","externalUrl":null,"permalink":"/PortfolioSite/blog/spec-driven-develoment/","section":"Blog","summary":"An introduction to spec-driven development and how specs, logs, and code agents can be used together when building applications.","title":"Spec-driven development with code agents","type":"blog"},{"content":" Introduction # In this project, we built a small AI-driven application that can provide an initial, guidance-oriented assessment of a student internship report. The purpose was not to create an “automatic true grading system,” but to explore how an external LLM API can become part of a concrete data flow inside an application.\nWe were given three kinds of material: learning objectives, report requirements, and the description of Dare-Share-Care. 
Based on that, we had to derive our own rubric, design prompts, build a backend call to a language model, and return a structured response. We also chose to build a simple frontend in React, so the user can either paste text directly or upload a .md or .txt file.\nWe used a coding agent as part of the development process. That helped us move faster from idea to working prototype, but it did not remove the need for our own decisions. We still had to decide on the rubric, prompt design, output format, and what the system should actually be used for. The coding agent was especially helpful for boilerplate, structure, and fast iteration, while our main contribution was defining the requirements and evaluating the quality of the solution.\nProblem and Goal # The goal of the solution was to build an application that can:\nreceive a report text apply a rubric derived from the assessment material send a prompt to an LLM via API receive a response in a structured format return feedback that can be used as a starting point for further dialogue It was important to us that the output should be presented as guidance-oriented feedback, not as a final grade or official assessment.\nOur Rubric # We derived the rubric directly from the three source documents. 
Instead of trying to assess “everything,” we chose five criteria that covered the central requirements in the material:\nCompany context and daily practice\nDoes the report describe the company, the work context, and the student’s insight into day-to-day operations?\nTasks, methods, and technical work\nDoes the report explain concrete tasks, methods, technologies, and technical choices?\nLearning goals and theory-practice link\nDoes the report clearly show how learning objectives and theory from the education were connected to practice?\nReflection and personal development\nDoes the report contain real reflection on development, challenges, and learning?\nValue creation and Dare-Share-Care\nDoes the report show what value the student created, and how Dare, Share, and Care are demonstrated?\nFor each criterion, we described low, medium, and high goal fulfillment. We also assigned weights, so technical work, learning goals, and reflection had the greatest importance.\nPrompt Design # We worked with two prompts: a system prompt and a user prompt.\nSystem Prompt # The system prompt defined the model as an academic assistant that should provide an initial assessment of an internship report. 
It instructed the model to:\nonly use the rubric and the provided report text not act as an official examiner make uncertainty explicit instead of guessing return the answer as structured JSON keep a constructive and concrete tone User Prompt # The user prompt contained:\nthe rubric in structured form instructions on how the criteria should be applied the report text itself What worked well was that the rubric was included with every request, because the model then had the criteria explicitly available instead of relying on an implicit understanding.\nEndpoint Design # We built a small backend with one main endpoint:\nPOST /api/evaluations Here, the client sends the report text, and the backend:\nbuilds the system prompt and user prompt sends the request to the OpenAI Responses API receives a structured JSON response returns the result to the frontend or client We also added:\nGET /health GET /api/rubric This made the solution more transparent and easier to test.\nExample Request and Response # Request # { \"reportText\": \"insert report text here\", \"model\": \"gpt-4.1-mini\" } Response # { \"rubricTitle\": \"Internship report evaluation rubric\", \"model\": \"gpt-4.1-mini\", \"generatedAt\": \"2026-04-27T12:00:00.000Z\", \"weightedScore\": 3.8, \"evaluation\": { \"overallLevel\": \"medium\", \"overallSummary\": \"The report is generally well written and concrete, but some parts could be more strongly linked to the learning objectives.\", \"criteria\": [ { \"id\": \"learning_goals_and_theory\", \"title\": \"Learning goals and theory-practice link\", \"level\": \"medium\", \"score\": 4, \"justification\": \"The report connects several experiences to the education, but not all learning objectives are covered equally clearly.\", \"evidence\": [ \"Describes the use of agile working methods\", \"Connects tasks to theories from the education\" ], \"improvementSuggestion\": \"Make it clearer how each learning objective was specifically fulfilled.\" } ], \"strengths\": [ \"Concrete descriptions of tasks\", \"Good technical insight\", \"Clear personal reflection\" ], \"weaknesses\": [ \"Uneven coverage of learning objectives\", \"Some judgments require more explicit documentation\" ], \"improvementSuggestions\": [ \"Structure the report more clearly around the learning objectives\", \"Add more concrete examples of value creation\" ], \"dialogQuestions\": [ \"Which learning objective do you think you fulfilled best?\", \"Where did you experience the greatest professional development?\" ], \"uncertainties\": [ \"It is unclear whether all learning objectives are explicitly covered\" ], \"disclaimer\": \"This is a guidance-oriented AI-based assessment and not a final grading.\" } } Frontend # We also chose to build a simple frontend in React. Here, the user can:\npaste report text directly into a text field upload a markdown or text file send the text to the API view the structured assessment in a more readable layout This made the project more complete, because it clearly showed the full flow from input to AI-generated feedback.\nWhat Worked Well # What worked best was the combination of a clear rubric and structured output. 
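The weightedScore field in the example response could be produced from per-criterion scores along these lines. The weights below are illustrative assumptions; the rubric's actual weighting is described only qualitatively above.

```typescript
// Hypothetical sketch: combine per-criterion scores (1-5) into one weighted score,
// like the weightedScore field in the example response. Weights are illustrative.
interface CriterionResult {
  id: string;
  score: number;  // 1 (low) to 5 (high)
  weight: number; // relative importance
}

function weightedScore(criteria: CriterionResult[]): number {
  const totalWeight = criteria.reduce((sum, c) => sum + c.weight, 0);
  const weightedSum = criteria.reduce((sum, c) => sum + c.score * c.weight, 0);
  return Math.round((weightedSum / totalWeight) * 10) / 10; // round to one decimal
}

const score = weightedScore([
  { id: "company_context", score: 4, weight: 1 },
  { id: "technical_work", score: 4, weight: 2 },
  { id: "learning_goals_and_theory", score: 4, weight: 2 },
  { id: "reflection", score: 3, weight: 2 },
  { id: "value_creation", score: 4, weight: 1 },
]);
console.log(score); // → 3.8
```

Keeping the aggregation in application code rather than asking the model for the final number makes the score reproducible from the per-criterion output.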
When the criteria were precise, the responses also became more useful. It especially helped that the model was asked to return feedback per criterion instead of only one large block of text.\nWe also found value in asking the model to highlight:\nstrengths weaknesses improvement suggestions questions for further dialogue That made the output more useful in an educational context.\nWhat Worked Less Well # The biggest challenge was that quality still depends heavily on how clearly the report is written. If a student only shows a learning objective implicitly, the model may overlook it or be uncertain about it. That means the AI assessment is not necessarily “wrong,” but it can become uneven if the input is unclear.\nAnother challenge was that the model can sometimes sound more confident than it should. Because of that, it was important for us to force uncertainty into the output format and continuously emphasize that the result is guidance-oriented.\nReflection on Using a Coding Agent # We used a coding agent during the development process, and it is important to state that openly. It helped us set up the backend, frontend, and file structure quickly, and it made it easier to iterate on the implementation.\nAt the same time, the project also showed the limitation of such a tool: the agent can help write code, but it cannot decide what makes a good rubric, what is pedagogically responsible, or how an AI assessment should best be presented in an educational context. 
Those choices still required human judgment.\nFor us, the coding agent became mainly a productivity tool, not a replacement for design decisions or reflection.\nWhat We Would Improve in the Next Version # If we were to continue developing the solution, we would like to:\nadd the ability to choose between multiple rubrics\nimprove error messages for API timeouts or invalid API keys\ndisplay the rubric directly in the frontend\nstore previous assessments\ncompare output from multiple models or prompts\nmake the assessment more traceable by showing quotes or text excerpts from the report behind each judgment\nWe could also imagine a version where the teacher can adjust criteria and weights without changing the code.\nConclusion # The project showed that it is relatively easy to get a language model to return an answer, but much harder to design a solution where the answer is actually useful. The most important part of the work was therefore not the API call itself, but the translation from assessment material into rubric, prompt, and structured feedback.\nThe result was a small, functional prototype that demonstrates how an LLM can be used as part of a larger data flow inside an application. 
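That data flow can be sketched as a small helper that turns report text and a rubric into the messages for the LLM call. The function name and prompt wording below are assumptions for illustration, not the project's actual prompt:

```typescript
// Hypothetical sketch of the flow: assessment material -> rubric -> prompt.
// The wording and structure are assumptions, not the project's actual code.
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

function buildAssessmentMessages(reportText: string, rubric: string): ChatMessage[] {
  return [
    {
      // The rubric lives in the system message so it constrains every reply.
      role: "system",
      content:
        "You assess internship reports against the rubric below. " +
        "Reply with JSON only, including per-criterion feedback, strengths, " +
        "weaknesses, improvement suggestions, dialogue questions, and " +
        "explicit uncertainties.\n\n" + rubric,
    },
    // The student's report text is passed through untouched as the user turn.
    { role: "user", content: reportText },
  ];
}
```

Keeping the rubric in the system message and the report in the user message is one way to make the "translation" step explicit and easy to iterate on.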
At the same time, it became clear that AI in this context works best as support for reflection and dialogue, not as an automatic grader.\n","date":"27 April 2026","externalUrl":null,"permalink":"/PortfolioSite/blog/ai-assessment-of-internship-reports-with-an-llm-api/","section":"Blog","summary":"How we turned assessment material into a rubric, prompts, and structured AI feedback for internship reports","title":"AI Assessment of Internship Reports with an LLM API","type":"blog"},{"content":"","date":"27 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/education/","section":"Tags","summary":"","title":"Education","type":"tags"},{"content":"","date":"27 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/llm/","section":"Tags","summary":"","title":"LLM","type":"tags"},{"content":"","date":"27 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/openai-api/","section":"Tags","summary":"","title":"OpenAI API","type":"tags"},{"content":"","date":"27 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/react/","section":"Tags","summary":"","title":"React","type":"tags"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/chatbot/","section":"Tags","summary":"","title":"Chatbot","type":"tags"},{"content":" Introduction # As part of my work with AI applications, I used an AI code agent to help create a React quiz website about meditation. The goal of the site was to let users answer questions from a set of meditation worksheets. The quiz does not require login, but it still saves the user’s answers locally in the browser so they can return later and review what they answered.\nFor this project, I used Codex, an AI coding agent, to support the development process. 
Instead of only asking for general advice, I used the agent as a practical development partner that could inspect the project folder, create files, write React code, install dependencies, test the build, and help solve issues during the workflow.\nWorkflow # The workflow started with sharing images of the meditation questions. Each question had answer options, and the correct answers were marked with a red X. The first step was therefore not just coding, but also interpreting the content from the images and turning it into structured quiz data.\nAfter that, Codex checked the project folder and found that it was empty. Based on that, it created a new React project using Vite. The site was built as a client-side React application, with all quiz questions stored directly in the React code. Each question included the question text, answer options, and the correct answer.\nThe main features created were:\nA quiz flow where users can answer one question at a time\nNavigation between questions and sections\nImmediate feedback after answering\nA review page where users can see their saved answers\nA score overview showing how many answers are correct\nLocal storage so answers are saved without needing an account\nA reset option to clear answers and start again\nAfter the first version was created, Codex also helped test the project. There were some issues with running npm in PowerShell because scripts were disabled on the system. Codex solved this by using npm.cmd instead. There was also a Vite build issue caused by a Windows permission problem, which was fixed by running the build with the needed permissions.\nFinally, Codex started a local development server so the site could be tested in the browser.\nWhat I Learned # This project taught me a lot about how AI code agents can be used in a realistic development workflow. 
Some of the main things I learned were:\nHow an AI code agent can help turn an idea into a working React application\nHow image-based content can be transformed into structured data for a quiz\nHow local storage can be used to save user progress without a backend\nHow React components can be organized around quiz logic, review screens, and user interaction\nHow AI can help with both writing code and solving development environment problems\nHow important it is to still review the result, because the AI may need corrections or clarification\nOne of the most useful parts of using Codex was that it did not only generate code in isolation. It worked directly inside the project folder, created the needed files, installed packages, ran the build, and checked whether the local server responded correctly. This made the workflow feel closer to working with a development assistant than just using a chatbot.\nChallenges I Faced # Even though using Codex made the development process faster, there were still some challenges.\nThe first challenge was reading the questions from the images. Some parts of the photos were slightly blurry or difficult to read, so there was a risk that small text details could be interpreted incorrectly. This means that even when AI helps with transcription, the final content still needs to be checked manually.\nAnother challenge was making sure the correct answers were transferred properly. Since the red X marks represented the correct answers, the quiz logic depended on interpreting those markings accurately. A small mistake in this step could make the quiz give wrong feedback to users.\nThere were also technical challenges. PowerShell blocked the normal npm command because of script execution restrictions. This is a common issue on Windows, and it showed that AI-assisted development still requires understanding the local environment. 
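Stepping back to the local-storage point from the learning list above: the save-and-review behaviour can be sketched with two small helpers. The storage key and the answer shape are assumptions, and the `Storage` API is narrowed to an interface so the sketch also runs outside a browser:

```typescript
// Hypothetical sketch of saving quiz answers without a backend.
// The key name and the questionId -> option shape are assumptions.
type Answers = Record<string, string>;

const STORAGE_KEY = "meditation-quiz-answers"; // assumed key name

// A minimal slice of the browser Storage API, so this is testable outside a browser.
interface KeyValueStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

function loadAnswers(store: KeyValueStore): Answers {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as Answers) : {};
}

function saveAnswer(store: KeyValueStore, questionId: string, option: string): Answers {
  const answers = loadAnswers(store);
  answers[questionId] = option;
  store.setItem(STORAGE_KEY, JSON.stringify(answers));
  return answers;
}
```

In the browser, `window.localStorage` satisfies this interface, so answers survive a page reload without any login or account.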
The Vite build also needed permission handling before it worked correctly.\nA larger challenge is that using an AI code agent can make development feel very fast, but it can also hide some of the complexity. It is important not to just accept the generated code blindly. I still need to understand how the React components work, how the state is saved, and how the application could be improved later.\nWhat Could Be Better # The site works as a basic quiz application, but there are several things that could be improved.\nFirst, the questions could be moved out of the main React file and into a separate data file. This would make the project easier to maintain, especially if more questions are added later.\nSecond, the site could include a better final results screen. Right now, users can review their answers, but a dedicated completion page could give a clearer summary when the quiz is finished.\nThird, the design could be improved further with more animations, better mobile refinements, and maybe a calmer visual style that fits the meditation theme even more.\nAnother improvement would be adding the option to export results or save multiple quiz attempts. Currently, the site saves only the latest answers locally. If users wanted to track progress over time, the app would need a more advanced storage structure.\nFinally, the image-to-question workflow could be improved by first writing all questions into a separate document and checking them carefully before coding. That would reduce the risk of small transcription mistakes.\nWhy This Project Matters # This project was valuable because it combined React development with AI-assisted coding. Instead of only learning about AI tools in theory, I used one to build something practical.\nIt also showed that AI code agents can be useful for more than just generating snippets. Codex helped with project setup, implementation, debugging, testing, and running the application locally. 
This made the development process faster and more interactive.\nAt the same time, the project also showed that AI does not remove the need for human judgment. I still had to define the goal, provide the content, check the result, and think about what would make the site useful for real users.\nConclusion # Overall, using Codex to build a React meditation quiz site was a useful learning experience. It helped me move from an idea and a set of photographed questions to a working web application with saved answers and a review function.\nThe workflow showed me how powerful AI code agents can be when they are used as development partners. They can speed up repetitive tasks, help solve errors, and create a strong first version of an application. However, the developer still needs to guide the process, check the content, and understand the code.\nThis project gave me more confidence in using AI as part of a real development workflow. It also helped me see both the strengths and limitations of AI-assisted coding, which is important when building more advanced applications in the future.\n","date":"20 April 2026","externalUrl":null,"permalink":"/PortfolioSite/blog/chatting-with-coding-agent-workflow/","section":"Blog","summary":"I do a programming exercise with a chosen coding agent. This is the documented workflow.","title":"Chatting with coding agent - Workflow","type":"blog"},{"content":"","date":"20 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/exercise/","section":"Tags","summary":"","title":"Exercise","type":"tags"},{"content":" About Me # I am a student in the Datamatiker programme at Erhvervsakademi Kobenhavn, an Academy Profession Degree in Computer Science.\nI started the programme in 2024 with no prior programming experience. 
Since then, I have been building a solid foundation in backend development and gradually moving toward more complex topics like architecture, testing, and deployment.\nThis portfolio documents that journey, not just the finished projects, but also the decisions, challenges, mistakes, and lessons that have shaped my development along the way.\nAbout This Site # This site serves two purposes. It is partly a requirement of the programme to maintain a portfolio with weekly devlogs throughout the semester. Beyond that, it is also a space I want to keep building on, a place to document projects, share technical reflections, and track my progress over time.\nThe content here is mainly technical, but I try to write in a way that is honest about the learning process rather than only showing polished outcomes.\nPersonal # Outside of programming, I enjoy a few things that help me recharge and stay curious. I am a movie buff and love watching films, whether it is older classics or something new. I also do bouldering, which I enjoy because it combines focus, patience, and problem-solving, a lot like programming, just on a wall.\n","date":"17 April 2026","externalUrl":null,"permalink":"/PortfolioSite/about/","section":"Emil Dagsberg","summary":"A portfolio built to document the jump from beginner to builder, with notes on projects, lessons learned, and the process behind the work.","title":"About","type":"page"},{"content":" Introduction # As part of my work with AI applications, I decided to build a chatbot for my portfolio website using Dify. The idea was to create a more interactive experience for visitors, allowing them to learn about my projects, skills, and background through conversation instead of only reading static content.\nThis project gave me hands-on experience with designing, testing, and deploying an AI-powered feature in a real-world setting.\nWhat I Learned # Building this chatbot taught me a lot about both the technical and practical sides of AI integration. 
Some of the main things I learned were:\nHow to use Dify to create and configure a chatbot without building everything from scratch\nHow to structure prompts so the chatbot gives more useful and relevant answers\nHow to connect AI tools to a website in a way that feels natural for users\nThe importance of testing responses to improve accuracy and user experience\nHow AI can add value to a personal portfolio by making it more engaging and dynamic\nChallenges I Faced # Even though Dify made the development process easier, there were still several challenges:\nMaking sure the chatbot stayed focused on my portfolio content instead of giving vague or unrelated answers\nWriting prompts and instructions that helped the chatbot respond clearly and consistently\nBalancing usefulness with simplicity, so the chatbot felt helpful without overcomplicating the website\nHandling limitations in AI responses, especially when the chatbot did not fully understand certain questions\nThinking about how users might interact with the chatbot in unexpected ways\nWhy This Project Matters # This project was valuable because it combined web development, AI integration, and user experience design. Instead of only learning theory, I was able to build something practical that I can include in my portfolio and continue improving over time.\nIt also showed me that building AI applications is not just about using a model. It requires planning, testing, and thinking carefully about how people will actually use the tool.\nConclusion # Overall, building a chatbot for my portfolio website with Dify was a useful learning experience. 
It helped me better understand how AI can be integrated into real applications, what challenges come up during development, and how important it is to design with the user in mind.\nThis project is a step toward creating more advanced AI-powered applications in the future, and it has given me more confidence in applying AI in practical and meaningful ways.\n","date":"17 April 2026","externalUrl":null,"permalink":"/PortfolioSite/blog/building-my-rag-chatbot/","section":"Blog","summary":"A reflection on the challenges and lessons learned from creating a Dify-powered chatbot for my portfolio website","title":"Building a Chatbot for My Portfolio Website with Dify","type":"blog"},{"content":"","date":"17 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/dify/","section":"Tags","summary":"","title":"Dify","type":"tags"},{"content":"","date":"17 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/portfolio/","section":"Tags","summary":"","title":"Portfolio","type":"tags"},{"content":"","date":"10 April 2026","externalUrl":null,"permalink":"/PortfolioSite/tags/estimation/","section":"Tags","summary":"","title":"Estimation","type":"tags"},{"content":" Introduction # In this course, I expect to gain practical experience with building and deploying AI-powered applications. 
My goal is to understand not just the theory, but also how AI can be applied in real-world scenarios.\nWhat I Hope to Learn # How to integrate AI into applications\nWorking with APIs and models\nDeploying AI solutions using tools like Docker\nUnderstanding limitations and ethical considerations\nChallenges I Expect # AI development can be complex, especially when dealing with:\nData quality and preprocessing\nModel performance and tuning\nDeployment and scalability\nConclusion # Overall, I’m excited to explore the possibilities of AI and develop projects that demonstrate both technical skills and practical understanding.\n","date":"10 April 2026","externalUrl":null,"permalink":"/PortfolioSite/blog/expectation-for-ai-applications/","section":"Blog","summary":"My expectations and goals for the AI Applications course","title":"Expectations for AI Applications","type":"blog"},{"content":"","externalUrl":null,"permalink":"/PortfolioSite/projects/","section":"Projects","summary":"","title":"Projects","type":"projects"},{"content":"","externalUrl":null,"permalink":"/PortfolioSite/series/","section":"Series","summary":"","title":"Series","type":"series"}]