Available MSc topics
Must read: brief instructions and rules
All graduate theses completed so far at CRTA can be viewed here. All theses defended before 2021 were made in the old laboratory; all theses from 2021 and 2022 were made and implemented in CRTA. Students who choose a thesis topic are expected to engage actively and work responsibly on it. Every student who chooses one of the topics offered below will be provided with all the necessary equipment, as well as a workstation and a (shared) computer in the laboratory and/or practicum. If the thesis includes an experimental part, students will be able to work in the associated laboratory where the experiments are performed: the Laboratory for Autonomous Systems, the Laboratory for Medical Robotics, or the Laboratory for Computational Intelligence. In addition to the laboratories, two practicums are also available to all students; their schedule can be seen here. When working in the laboratories and practicums, students must adhere to all rules of conduct and the rules for using computer, laboratory, and other equipment. After work, workplaces in CRTA must always be left clean and tidy.
In addition to the laboratories and laboratory equipment, the student section of CRTA provides 3D printers, various tools, and other equipment needed for the many topics that include practical experimental work.
Researcher Luka Rabuzin is responsible for student work and the shared tools, and he will provide you with all the necessary guidelines when you start working on your topic.
For any general questions and experiences, you can always reach out to our current students, graduates, or demonstrators.
What is a "Project" and how is it related to the master's thesis?
The project in the 9th semester of studies, which precedes enrollment in the master's thesis, is a necessary prerequisite for applying for a master's thesis topic, especially if the mentor or co-mentor is a professor from CRTA. The project topic is closely related to the master's thesis topic and forms a cohesive unit with the thesis. When registering the project in Studomat, it is necessary to agree with the future mentor or co-mentor on the scope of the project. The project typically involves solving specific parts of the master's thesis topic (addressing certain aspects of the described topics). The project is submitted to the mentor (and co-mentor) in digital format.
Writing and submission of the MSc thesis
The master's thesis should be written in accordance with the official guidelines and template for the master's thesis, which can be found here. Before writing the thesis, it is necessary to thoroughly study all the materials and to contact the mentor or co-mentor with any questions.
Before starting to write, it is suggested to discuss the structure of the thesis with the mentor or co-mentor. Given the chosen submission deadline, the complete written thesis should be sent to the mentor or co-mentor for review (Word and PDF formats) via email at least 10 days before the official submission deadline. The thesis sent for review must be complete and free of spelling and grammar errors (make sure to run a spell check before sending).
List of available topics
- Development of an end-effector for physical human-robot interaction and physiotherapy
- Robotic medical drill control and cranial bone drilling experiments (for more information, please contact Assoc. Prof. Marko Švaco, Ph.D.)
- Research and development of a robot-assisted system for surgeon navigation during knee surgery (for more information, please contact Assoc. Prof. Marko Švaco, Ph.D.)
- Management of a fleet of mobile robots in the ROS2 environment
- Development of an interactive setup for the game of Tic-Tac-Toe
- Autonomous charging of electric vehicles using a robotic arm
- Dual-handed assembly of a fuse housing
- Robotic handling of objects in an unstructured state
- Performing advanced missions using the KUKA KMR iiwa robot and the Robot Operating System (ROS2) – busy
List of topics 2023/2024 - Assoc. Prof. Filip Šuligoj, Ph.D.
- (reserved) Extrinsic calibration of a robotic in-hand stereovision system using neural networks
- Estimation of Patient Head Position in 3D Computed Tomography Images with Application in Robotic Neurosurgery
- (reserved) Force control using a robotic system for interacting with curved surfaces
- (reserved) Integration of machine vision and learning methods for automated detection, localization and verification of microprocessor chips
- (reserved) Robotic system for autonomous navigation and manipulation of objects
- Automation of extrinsic calibration of an “Eye-in-Hand” robotic system
- Development and implementation of a packaging control and monitoring system with integration of weighing scales and a vision system
If you are interested in an area or topic that is not listed, feel free to propose your own topics, ideas, and projects to one of the CRTA employees; you can then discuss your proposal with a potential mentor and/or co-mentor and collaborators on the topic. For any other questions, feel free to email the teacher responsible for a specific topic or visit them during consultation hours.
Detailed description of available topics
Extrinsic calibration of a robotic in-hand stereovision system using neural networks
This paper explores the use of neural networks for extrinsic calibration of a robotic in-hand stereovision system, essential for precise localization in space. These systems find applications in industry, medicine, and scientific research. Conventional calibration methods are often suboptimal due to challenges such as noise and optical nonlinearities. The paper proposes a non-parametric calibration based on neural networks, which increases the robustness and flexibility of the system, and enables automation of the process, thereby improving the efficiency and speed of calibration.
The work uses the existing stereo vision system, with macro lenses and an automatic algorithm for precise localization of retroreflective spheres. Along with incremental movements of the robot arm to the given positions, the stereo vision system should store, in a structured way, the 3D positions of the robot and the pixel coordinates of the centers of the localized spheres; together these form a calibration training set for the neural network.
The research includes the following steps:
- Program the robot and its communication with the stereo-vision system for creating the calibration data set.
- Propose the size and configuration of the training set for the neural network (reference retroreflective spheres are used for localization).
- Investigate and implement neural network models that can efficiently map the coordinates of the sphere centers in the stereo-vision images to the known 3D robot positions.
- Evaluate the impact of different neural network models and parameters on the accuracy and robustness of the calibration.
- Validate the accuracy of the neural-network-based extrinsic calibration on positions that were not part of the training set by computing the Euclidean error between known and predicted positions.
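The core idea of the last three steps can be sketched in a few lines: a small network learns the mapping from stereo pixel coordinates to robot-frame 3D positions. The sketch below is a minimal illustration using synthetic data and a tiny NumPy MLP; the input/output dimensions, network size, and data-generating function are illustrative assumptions, not the laboratory setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the calibration set: stereo pixel pairs (u1, v1, u2, v2)
# and corresponding robot-frame 3D positions. The true mapping here is an
# arbitrary smooth function; in the lab it would come from robot/camera logs.
X = rng.uniform(-1.0, 1.0, size=(500, 4))
A_true = rng.normal(size=(4, 3))
Y = np.tanh(X @ A_true) + 0.01 * rng.normal(size=(500, 3))

# Tiny one-hidden-layer MLP trained with full-batch gradient descent.
W1 = 0.1 * rng.normal(size=(4, 32)); b1 = np.zeros(32)
W2 = 0.1 * rng.normal(size=(32, 3)); b2 = np.zeros(3)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

def mse(pred, Y):
    return float(np.mean((pred - Y) ** 2))

lr = 0.05
_, pred0 = forward(X)
loss0 = mse(pred0, Y)
for _ in range(2000):
    H, pred = forward(X)
    G = 2.0 * (pred - Y) / len(X)          # dL/dpred
    gW2 = H.T @ G; gb2 = G.sum(axis=0)
    GH = (G @ W2.T) * (1.0 - H ** 2)       # backprop through tanh
    gW1 = X.T @ GH; gb1 = GH.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
print(f"MSE before: {loss0:.4f}  after: {mse(pred, Y):.4f}")
```

In the thesis, the held-out evaluation would replace the training-set MSE with the Euclidean error on robot positions excluded from training, as the last step requires.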
The paper should include a review of relevant literature and a detailed description of the methods and algorithms used, as well as an evaluation of the results obtained in the context of applicability in real applications. It is also necessary to list the literature used and any assistance received from a mentor or collaborator.
For more details on this topic, please contact dr. sc. Filip Šuligoj
Estimation of Patient Head Position in 3D Computed Tomography Images with Application in Robotic Neurosurgery
The paper focuses on robust estimation of patient head position in 3D computed tomography (CT) images, which is of crucial importance in the context of increasing automation and robotic applications in neurosurgery. Stereotactic brain surgeries, which are increasingly performed with the help of robots, require high precision in the localization of tumors and other pathological conditions. Head position estimation is therefore becoming a key element for the success of such surgical procedures. Although head position information is often available as metadata in the DICOM format, verifying its accuracy is imperative because of its clinical impact.
The research methodology includes:
- Development and implementation of algorithms for precise head position estimation based on geometric analysis of facial features and anthropomorphic landmarks (e.g. nose, eyes).
- Application and evaluation of various methods, including eigenvalue analysis and filtration of biomedical images based on Hounsfield values for tissues of different densities.
- Comparison and analysis of the accuracy and robustness of different approaches, with a special focus on their applicability in robotic neurosurgery.
- Validation of methods using independent, real (anonymized) CT scans of the human head.
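Two of the listed building blocks, Hounsfield-value filtration and eigen-analysis, can be combined in a short sketch: threshold the volume at a bone-like HU value and take the principal axes of the resulting voxel cloud. The volume below is synthetic (a noisy ellipsoid), and the 64-voxel grid, HU values, and 300 HU threshold are illustrative assumptions; real scans would be loaded from DICOM files instead.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for a CT volume in Hounsfield units (HU): an elongated
# "bone" ellipsoid (~1000 HU) inside soft tissue (~40 HU).
z, y, x = np.mgrid[0:64, 0:64, 0:64]
ellipsoid = ((x - 32) / 20.0) ** 2 + ((y - 32) / 10.0) ** 2 + ((z - 32) / 10.0) ** 2 <= 1.0
volume = np.where(ellipsoid, 1000.0, 40.0) + rng.normal(0, 5, size=(64, 64, 64))

# 1) Filter by HU: keep voxels in a typical range for bone.
bone = np.argwhere(volume > 300.0)          # (N, 3) voxel indices (z, y, x)

# 2) Eigen-analysis of the voxel cloud: the eigenvector belonging to the
#    largest eigenvalue is the dominant (longest) axis of the structure.
centroid = bone.mean(axis=0)
cov = np.cov((bone - centroid).T)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
main_axis = eigvecs[:, -1]                  # axis of largest extent

print("centroid:", centroid)
print("main axis:", main_axis)
```

On the synthetic ellipsoid the recovered main axis aligns with the long (x) direction; on a real scan the same machinery would be applied to segmented anatomical landmarks rather than a single blob.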
The paper will include a comprehensive review of relevant literature, a detailed description of the methods and algorithms used, and an evaluation of the results obtained with regard to their clinical applicability. The paper must list the literature used and any assistance received from a mentor or collaborator.
For more details on this topic, please contact dr. sc. Filip Šuligoj
Force control using a robotic system for interaction with curved surfaces
The functionality of force control is becoming increasingly important in modern robotics, where robots are expected not only to perceive the environment visually, but also to interact with it in an advanced manner. This includes the robot's ability to interpret and react, in real time, to various physical parameters such as force and torque. Such interaction enables robots to adapt to complex and dynamic environments, which is crucial for future applications in industry, healthcare, and other sectors.
For this purpose, the paper proposes to use ROS2 (Robot Operating System 2), which enables the integration and control of various equipment and their functionality. The Franka Panda robot in combination with the Realsense D435 depth camera is used as a hardware component.
Research tasks include:
1. Designing an adaptable robotic tool (a mounted depth camera and a spherical end tool) that can be attached to the existing gripper jaw, as well as a curved workpiece.
2. Use of ROS2 for the integration of functionality and control of the robot and the acquisition of point cloud data.
3. Implementation of force control to maintain constant contact with the curved surface.
4. Setting up a scenario in which a robot moves a tool to cover a linear path, planned based on a point cloud obtained by a depth camera, over a curved surface while maintaining a constant force.
5. Analysis of the results, especially the accuracy in maintaining a constant force and following the planned trajectory.
The paper will contain a detailed review of the literature relevant to force control and vision in robotics. It will also describe the methods and algorithms used to implement the control schemes and acquire point clouds. An evaluation will be conducted to determine the accuracy and robustness of the implemented force control on curved surfaces. The paper should list the literature used and any assistance received from a mentor or collaborator.
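The force-regulation idea in task 3 can be illustrated with a toy closed loop: the contact is modeled as a linear spring, and a proportional controller adjusts the commanded penetration until the measured force matches the target. All constants are illustrative assumptions; on the real Franka Panda the command would go through a ROS2 impedance/force controller, not this toy plant.

```python
# Minimal sketch of a proportional force regulator against a stiff surface,
# with the surface simulated as a linear spring (force = stiffness * depth).
K_SURFACE = 2000.0   # N/m, assumed surface stiffness
F_TARGET = 5.0       # N, desired constant contact force
KP = 0.00015         # m/N, proportional gain on the force error
DT = 0.001           # s, nominal control period (for reference only here)

depth = 0.0          # current penetration command (m)
history = []
for _ in range(3000):
    force = K_SURFACE * max(depth, 0.0)   # spring model of the contact
    error = F_TARGET - force
    depth += KP * error                   # push in (or back off) along the normal
    history.append(force)

print(f"steady-state force: {history[-1]:.3f} N")
```

The loop converges geometrically (factor 1 - KP*K_SURFACE per step); choosing the gain so that this factor stays below 1 is exactly the stability consideration the real implementation would face.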
For more details on this topic, please contact dr. sc. Filip Šuligoj
Integration of machine vision and learning methods for automated detection, localization and verification of microprocessor chips
In the context of rapid technological development and mass production of sophisticated electronic components, quality control remains challenging and economically demanding. In particular, machine vision, which uses camera technology to collect information, has become a crucial tool in industrial control processes. In this work, we integrate a combination of machine vision and learning, specifically neural networks, for quality control of microprocessor chips.
Specific research tasks include:
- Acquisition of images of the microprocessor boards under different lighting conditions and orientations, using an industrial camera.
- Annotation and creation of datasets for neural network training.
- Application of YOLO (You Only Look Once) neural network and machine vision algorithms for detection, localization and verification of elements on different microprocessor boards such as Arduino UNO, Jetson NX Xavier, Raspberry Pi, STM32 and UP board.
- Analysis and evaluation of model performance under different conditions, focusing on model robustness in the context of variations in orientation, object damage, and lighting conditions.
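One post-processing step of the verification task can be sketched in plain Python: given YOLO-style detections (class name, confidence, bounding box), check that every component expected on a given board type was actually found. The class names, boxes, and expected-component table below are illustrative assumptions, not a real annotation schema; an IoU helper is included since box overlap is the standard matching criterion in detection evaluation.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical expected components per board type.
EXPECTED = {"arduino_uno": {"mcu", "usb_port", "power_jack"}}

def verify(board, detections, min_conf=0.5):
    """Return the set of expected components missing from the detections."""
    found = {cls for cls, conf, _ in detections if conf >= min_conf}
    return EXPECTED[board] - found

dets = [("mcu", 0.91, (120, 80, 180, 140)),
        ("usb_port", 0.84, (10, 20, 60, 70)),
        ("power_jack", 0.33, (10, 150, 60, 200))]   # below threshold
print(verify("arduino_uno", dets))                   # flags the missing jack
```

In the thesis the detections would come from the trained YOLO model rather than a hand-written list, and the confidence threshold would itself be part of the robustness evaluation.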
The paper must list the literature used and any assistance received from a mentor or collaborator.
For more details on this topic, please contact dr. sc. Filip Šuligoj
Robotic system for autonomous navigation and object manipulation
In the context of the development of autonomous robotic systems for industrial and logistics applications, this paper focuses on the integration of an autonomous mobile platform with a robotic arm. Specific hardware includes the Waypoint Vector mobile robotic platform, a Franka Emika Panda robotic arm with controller, a UPS power supply system, and a computer. This configuration opens new possibilities for automating various tasks that require navigation and manipulation of objects.
The specific tasks of the research are as follows:
- Design, assembly and connection of all physical system components, including mobile platform, robotic arm, UPS and computer.
- Configuration of the ROS environment and its integration with all physical system components.
- Demonstration of the mobile platform's ability to move autonomously to multiple physical locations within the Laboratory (in the CRTA area).
- Demonstration of the execution of the palletizing task to be performed by the robotic arm at selected locations.
- Analysis and evaluation of system performance to confirm the robustness and efficiency of the proposed implementation.
In the paper, it is necessary to cite relevant literature and methods, and mention possible help received from mentors or associates.
For more details on this topic, please contact dr. sc. Filip Šuligoj
Automation of extrinsic calibration of an “Eye-in-Hand” robotic system
In light of the ubiquitous application of robotic systems in industry and research, this paper focuses on the development and implementation of an automatic system for calibrating the spatial relationship between a robotic arm and an embedded 3D vision system. More specifically, the goal is to calculate the transformation matrix between the robot flange and the 3D camera coordinate system, known as extrinsic calibration in the “eye-in-hand” configuration.
Specific tasks of the thesis include:
- Review and implementation of calibration methods: study and analysis of existing methods for extrinsic calibration, with implementation of the chosen method.
- Design and manufacture of a 3D camera mount: design and manufacture of a mount that will allow the 3D camera to be mounted on the flange of the robotic arm.
- Configuration of the operating and development environment: installation and setup of the necessary software environment, including the operating system and development tools, for effective system communication and control.
- Establishment of a communication protocol: development and testing of the communication protocol between the robot, the computer, and the 3D vision system.
- Creation of the calibration procedure and program: development of a software solution that, in combination with the calibration object, automates the process of 3D camera calibration.
- Evaluation of calibration accuracy: conducting experimental measurements and analysis to determine the accuracy and robustness of the implemented calibration process.
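Full eye-in-hand calibration solves the AX = XB formulation (e.g. via OpenCV's calibrateHandEye), but one classical building block, recovering a rigid transform between corresponding point sets, can be sketched compactly with the Kabsch/SVD method. The ground-truth rotation, translation, and point count below are illustrative assumptions for a self-contained demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

def rigid_transform(P, Q):
    """Least-squares R, t such that Q ~ R @ P + t (Kabsch algorithm via SVD)."""
    cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical ground-truth camera-to-flange transform.
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([[0.05], [-0.02], [0.10]])

P = rng.uniform(-0.5, 0.5, size=(3, 20))   # calibration points, camera frame
Q = R_true @ P + t_true                     # same points, flange frame

R_est, t_est = rigid_transform(P, Q)
print("max rotation error:", np.abs(R_est - R_true).max())
```

With noise-free correspondences the transform is recovered exactly; in the automated procedure the accuracy evaluation would repeat this with measured, noisy calibration-object points.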
In the paper, it is necessary to cite relevant literature and methods, and mention possible help received from mentors or associates.
For more details on this topic, please contact dr. sc. Filip Šuligoj
Development and implementation of a packaging control and monitoring system with integration of weighing scales and a vision system
This thesis aims to develop an Autonomous System for Control and Monitoring (ASCP), which efficiently combines scales and vision systems for precise detection and measurement of objects in the framework of "Pick and Pack" operations. The system will use depth (stereo) cameras for visual detection of objects and sensors for mass measurement. The focus will be on the development of programs in C++ for object detection using the YOLO algorithm, and on integration with weighing logic.
Specific Tasks
- User interface design: creating an intuitive user interface for interacting with the vision system and scales.
- Training the YOLO model for detection: collection and annotation of data for training the YOLO model, using stereo cameras to obtain depth information.
- Implementation of the YOLO model in C++: inclusion of the trained YOLO model in a program framework developed in C++ for real-time object detection.
- Weighing logic integration: development of an algorithm that links the information obtained from the scales with the detected objects and checks whether the object's mass and identity are consistent.
- Testing and evaluation: conducting system tests under different conditions and analyzing the results to verify the accuracy and robustness of the implemented algorithms.
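The weighing-logic check at the heart of the system can be sketched in a few lines (shown here in Python for brevity, though the thesis targets C++): the class reported by the vision system must agree with the measured mass within a tolerance band. The product table and 5 % tolerance are illustrative assumptions, not real packaging data.

```python
MASS_TABLE = {          # hypothetical nominal mass in grams per detected class
    "box_small": 250.0,
    "box_large": 740.0,
}

def mass_consistent(detected_class, measured_mass_g, tolerance=0.05):
    """True if the measured mass is within +/- tolerance (relative) of nominal."""
    nominal = MASS_TABLE.get(detected_class)
    if nominal is None:
        return False                      # unknown class -> reject the package
    return abs(measured_mass_g - nominal) <= tolerance * nominal

print(mass_consistent("box_small", 254.0))   # within 5 % of 250 g
print(mass_consistent("box_large", 650.0))   # roughly 12 % low -> flagged
```

In the full system this check would run per "Pick and Pack" cycle, with the detected class coming from the YOLO model and the mass from the scale readout.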
Methodology
The development will be carried out using the C++ programming language and relevant libraries for image and sensor data processing. An annotated data set will be used for training the YOLO model, while real objects and conditions will be used for testing and evaluation.
Literature and Cooperation
It is necessary to cite relevant literature and methods in the paper. Possible help or cooperation received from mentors or collaborators should also be mentioned.
For more details on this topic, please contact dr. sc. Filip Šuligoj
Development of an end-effector for physical human-robot interaction and physiotherapy
Language of the Master's thesis: English
Mentor: Asst. Prof. Marko Švaco, Ph.D.
Co-mentor: Asst. Prof. Tadej Petrič, Ph.D. – homepage
Musculoskeletal disorders (MSDs) are referred to as the pandemic of the modern world. They account for the majority of all recognized diseases in the European Union and cause millions of lost working days each year. MSDs are soft tissue injuries caused by sudden impact, force, vibration, and unbalanced positions. The treatment of MSDs has been summarized in several clinical practice guidelines.
In the scope of this thesis, a detailed state-of-the-art analysis of active projects and research in the field of robotic physiotherapy needs to be done. All types of physiotherapy should be investigated such as physical contact, massage, ultrasound, heat, etc.
In the scope of the thesis, a prototype of a robotic end-effector based on the human hand should be researched, developed, and tested in the Laboratory for medical robotics at CRTA on a robot arm with position and impedance control.
This task entails an investigation into the biomechanics and anatomy of the human hand (palm, fingers, thumb, fist) as used in physiotherapy. The developed end-effector for the collaborative robot is intended to reproduce therapeutic movements and apply forces to a human subject in a laboratory mockup scenario. Important mechanical properties (stiffness, hardness, elasticity, etc.) and physical properties (induced pressure, temperature, friction, etc.) should be measured with the purpose of developing a highly effective end-effector.
For more details on this topic, please contact doc. dr. sc. Marko Švaco.
Dual-handed assembly of a fuse housing
With the increasing application of dual-arm industrial robots, the possibilities are significantly expanded compared to single-arm robotic workstations. Within the Laboratory for Artificial Intelligence, there is a dual-arm robotic system equipped with 15 degrees of freedom, two 2D industrial cameras, tool changers, grippers, and a worktable with an industrial product - a fuse housing. In order to achieve complete automation and robotization of the fuse housing assembly process with the existing dual-arm Yaskawa CSDA10F, it is necessary:
- to reshape and enhance the machine vision system (hardware and software) to make it robust and functional,
- to reshape and enhance the robotic tools, tool holders, magazines, pallets, fixtures, and delivery paths used for the preparation and positioning of the components in the fuse housing assembly,
- to develop an algorithm for learning the desired arrangement of fuses and relays based on 2D perception and image processing,
- to program the process of autonomous assembly of fuse casings according to the learned schedule from the previous step,
- to create a simple graphical user interface (GUI) for controlling a robotic station,
- to develop and implement an algorithm for quality control (inspection) of the assembled fuse box enclosure.
The thesis must be validated on the equipment in the Laboratory for Artificial Intelligence. For the developed application, it is necessary to design and manufacture all the required structural, mechatronic, and other elements/components. The demonstration on the laboratory equipment should run in an automatic mode of operation through a user interface.
For more details on this topic, please contact doc. dr. sc. Marko Švaco and dr.sc. Josip Vidaković.
Robotic handling of objects in an unstructured state
Industrial robots are increasingly being used in unstructured work environments, where the goal is to manipulate objects whose six degrees of freedom (three translations and three rotations) are unknown in advance. In the Laboratory for Autonomous Systems at CRTA, the problem of extracting parts from a box using a stationary industrial 3D vision system needs to be solved on the existing experimental setup. As a preliminary research step, it is necessary to study previously conducted student works on similar topics. In this thesis, it is necessary:
- to develop the necessary construction and programming solutions for automatic tool changing on the robot,
- to create a tool for calibrating the vision system and the robotic arm,
- to select at least nine workpieces of different shapes (rectangular, cylindrical, disc-shaped, flat, etc.) and different dimensions,
- to examine, implement, and describe all available functions for 3D detection and localization of the selected workpieces.
The thesis must be validated on the equipment in the Laboratory for Autonomous Systems. For the developed application, it is necessary to design and manufacture all the required structural, mechatronic, and other elements/components using the available equipment in the laboratory. The demonstration on the laboratory equipment should run in an automatic mode of operation through an arbitrary user interface.
For more details on this topic, please contact doc. dr. sc. Marko Švaco.
Executing advanced missions using the KUKA KMR iiwa robot and the Robot Operating System (ROS2)
The KUKA KMR iiwa mobile robot can be programmed and deployed using the KUKA Sunrise environment. The Sunrise environment requires robot programming in the Java programming language, which may be less practical for robotics engineers than Python or C++. An interface has therefore been developed at a Norwegian university that enables control of the mobile robot and reading of its sensors from the ROS2 environment. In addition to easier programming, ROS2 also allows the application of other mapping, localization, and navigation algorithms, not just those provided by KUKA. The KUKA KMR mobile robot also includes the KUKA iiwa industrial collaborative robot, which can likewise be integrated into the ROS2 environment together with the MoveIt package, offering exceptional flexibility in working with the robot. In this thesis, it is necessary:
- to investigate and implement communication from ROS2 to the KUKA KMR robot (hardware interface)
- to investigate and implement communication from ROS2 to the KUKA iiwa robot using the MoveIt package
- to select the most appropriate algorithms for simultaneous localization and mapping and implement them on the robot
- to select the most suitable algorithm for autonomous navigation of the robot in space
- to define and execute the task of object retrieval, object manipulation, and object placement at a predefined location.
For more details on this topic, please contact doc. dr. sc. Marko Švaco and Branimir Ćaran, Ph.D.
Development of an interactive setup for the game of Tic-Tac-Toe
Ambient and motor intelligence enable people to navigate and adapt to many new situations. One of the areas in which perception of the environment and human intelligence come to the fore is games, and one of the relatively simple games is Tic-Tac-Toe. To enable a robotic system to play Tic-Tac-Toe against a human opponent, various sensory and motor capabilities must be integrated into the robotic system. Perception of the game board is a challenging task, as robust perception is influenced by several variable parameters such as lighting direction and intensity, and the color, thickness, and dimensions of the "X" and "O" markers on the game board. Furthermore, motion planning for the robotic arm is not trivial, as it requires avoiding collisions with the environment and planning movements that avoid singularities and large decelerations and velocities of individual robot joints or the tool tip. On the existing Tic-Tac-Toe setup in the Laboratory for Autonomous Systems, it is necessary:
- to analyze the robot's workspace with the aim of increasing the effective playing area,
- to analyze and propose a new arrangement of the vision system (one or more cameras) for robust perception of the playing area on the game board,
- to develop and implement a computer vision algorithm for recognizing the planar position of the game board and the game symbols "X" and "O",
- to create a graphical user interface for interacting with the player and running the entire application,
- to analyze and verify each robotic motion before execution, using simulation software packages such as RoboDK,
- to make the necessary structural, control, and other modifications to the experimental setup,
- to create a procedure for automated calibration of vision systems and robots.
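Besides the tasks above, the finished application also needs move-selection logic so the robot can actually play. For a 3x3 board this is small enough for exhaustive minimax; the sketch below is a standard textbook version, with the board as a flat list of 9 cells and the robot arbitrarily assumed to play "X".

```python
# Winning lines of the 3x3 board, indexed 0..8 row by row.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from the point of view of 'X' (+1 win, -1 loss)."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    free = [i for i, cell in enumerate(board) if cell is None]
    if not free:
        return 0, None                      # board full: draw
    best = None
    for i in free:
        board[i] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[i] = None                     # undo the trial move
        if best is None or (player == "X") == (score > best[0]):
            best = (score, i)               # X maximizes, O minimizes
    return best

# Example: O threatens the top row, so X must block at cell 2.
board = ["O", "O", None,
         None, "X", None,
         None, None, None]
score, move = minimax(board, "X")
print("best move for X:", move)
```

In the full setup this search would be fed the board state recovered by the computer vision algorithm, and the chosen cell would be passed to the verified robot motion for execution.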
The master's thesis must be done on the existing setup with the UR5 robot in the Laboratory for Autonomous Systems at CRTA.
For more details on this topic, please contact doc. dr. sc. Marko Švaco and dr.sc. Filip Šuligoj.
Grading criteria for graduate theses
Graduate theses must be written according to the official guidelines of the FMENA. The grade given by the mentor and co-mentor is based on adherence to the formal regulations and guidelines, but more importantly on the work on the thesis itself, independence, and originality. In addition to the individual grade for the thesis, a grade is also awarded for the presentation in front of the committee.
We invite all students to read all the rules and instructions related to the preparation of graduate theses. For instructions on creating the thesis presentation, contact your mentor directly.
The project was co-financed by the European Union from the European Regional Development Fund.
The website was co-financed by the European Union from the European Regional Development Fund.
The content of the website is the sole responsibility of the Faculty of Mechanical Engineering and Naval Architecture.