Anvendt matematikk og mekanikk (Applied Mathematics and Mechanics)
http://hdl.handle.net/10852/9
2019-03-26
http://hdl.handle.net/10852/66382
En analyse av etablerte bølge-strøm-modeller for anvendelse av kajakkpadlere i Lofoten (An analysis of established wave-current models for use by kayakers in Lofoten)
Renså, Johannes Arnevik
2018-01-01
http://hdl.handle.net/10852/65786
Investigating the Advantages of Having a Tensor Product Finite Element Software
Tharmanathan, Nithusha
2018-01-01
http://hdl.handle.net/10852/65762
A posteriori error estimation for multiple-network poroelasticity
Ødegaard, Emilie Eliseussen
2018-01-01
http://hdl.handle.net/10852/65706
Investigation of Nanoparticles in Zebrafish using Particle Tracking Velocimetry.
Farhad, Shako
Cancer is a major public health problem worldwide and the second leading cause of death. To improve the safety and efficacy of anti-cancer drugs and reduce side effects, nanoparticles (NPs) have been studied intensively. The concept of selectively detecting and destroying cancer cells with NPs is very exciting, but over the past 10 years only 0.7% (median) of the administered NP dose has been found to reach a solid tumour. To study the flow and distribution of fluorescent NPs in vivo, the zebrafish model has become very popular, as it is optically transparent and therefore easy to image. Particle Image Velocimetry (PIV) is a non-intrusive optical measurement method which gives velocity fields resolved in both time and space. Using PIV as the backbone, a Particle Tracking Velocimetry (PTV) code was developed to track NPs varying in size from 100 nm to 1200 nm. The mean and standard deviation of the percentage of nanoparticle trajectories likely to end up in the vein margin within the next frame decrease with nanoparticle size (800 nm, 400 nm, 200 nm). Determining how nanoparticle trajectories are distributed throughout the vein as a function of nanoparticle size requires more data.
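A minimal sketch of the frame-to-frame linking step at the heart of a PTV code, using greedy nearest-neighbour matching (the function name and the displacement threshold are illustrative; the thesis code is not reproduced here):

```python
import numpy as np

def link_particles(frame_a, frame_b, max_disp):
    """Greedily link particle centroids between two consecutive frames.

    frame_a, frame_b: (N, 2) arrays of particle positions (pixels).
    max_disp: largest displacement accepted as a match.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    links = []
    taken = []  # indices in frame_b already matched
    for i, p in enumerate(frame_a):
        d = np.linalg.norm(frame_b - p, axis=1)
        d[taken] = np.inf  # each particle in frame_b is used at most once
        j = int(np.argmin(d))
        if d[j] <= max_disp:
            links.append((i, j))
            taken.append(j)
    return links
```

Chaining such links across many frames yields the particle trajectories whose statistics are analysed in the abstract above.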
2018-01-01
http://hdl.handle.net/10852/64111
Recommender systems analysis by compressed sensing
Munsterhjelm, Kristofer
Recommender systems are algorithms that suggest content or products to users on the internet. They are becoming ever more important due to the massive growth of content on popular web sites, yet their design is often guided only by empirical results. This has two drawbacks: mathematical analysis lags behind the use of the methods, and the methods may focus too much on immediate results instead of taking a wider perspective, leading to unintended consequences such as social media polarization. To help offset these drawbacks, this thesis considers both how one may analyze recommender systems more rigorously and how they may be improved by optimizing for more than short-term results. The thesis approaches recommender systems from a compressed sensing perspective, starting with an explanation of compressed sensing as the study of how to approximate the cardinality minimization problem. It then reviews how compressed sensing can be generalized to two matrix-valued problems, matrix sensing and matrix completion. The application of matrix completion to the bilinear factorization model used in recommender systems follows, and we finish by investigating improvements to the basic bilinear factorization model, as well as suggesting other directions of improvement.
2018-01-01
http://hdl.handle.net/10852/63605
A posteriori modelling error estimation for linear elasticity and poroelasticity
Pysarieva, Valentyna
2018-01-01
http://hdl.handle.net/10852/63519
An Investigation of Ski-Snow Friction
Tjørstad, Rasmus Nes
2018-01-01
http://hdl.handle.net/10852/63505
An Algorithmic Differentiation Tool for FEniCS
Mitusch, Sebastian Kenji
2018-01-01
http://hdl.handle.net/10852/63482
Using Personalized Virtual Hearts to Assess Arrhythmia Risk in Acute Infarction Patients
Strøm, Vilde Nyrønning
Ventricular fibrillation (VF) occurs in ~10% of myocardial infarction (MI) patients. These patients have higher in-hospital mortality and an increased risk of lethal arrhythmias. The susceptibility to VF during acute MI and the underlying mechanisms remain incompletely understood. Utilizing patient-specific computational heart models to investigate the underlying causes could give unique insight into VF during MI in a non-invasive manner. The objective of this study is to automate the process of generating patient-specific heart models and to use them to identify risk factors for VF during a first acute MI, before and after primary percutaneous coronary intervention (PPCI). 38 patients (17 VF and 21 non-VF) underwent MRI scans five days and three months post-MI. Finite element models were constructed by segmenting the MRI scans into healthy and infarcted tissue, with the infarct modeled as a gradient of ischemia 15 minutes post-occlusion with decreased conduction and altered action potential morphology. Furthermore, all five-day models were paced from 17 sites in the left ventricle to simulate ectopic activity and assess arrhythmia inducibility. We successfully implemented an efficient semi-automated pipeline for constructing the finite element models, available as open source to the public. The pipeline was used to generate the most extensive collection of personalized ventricular models created so far. The five-day models had significantly larger scars than the three-month models (8% vs. 2.6%, p < 0.05). Through simulations, we found that all inducible sites had reentrant circuits that initiated and persisted within the ischemic zone. The border zone was particularly susceptible to arrhythmias: 44% of border-zone pacing sites resulted in reentry. Furthermore, inducible patients had significantly larger infarcts (12.06% vs. 1.96%, p < 0.05) than non-inducible patients.
2018-01-01
http://hdl.handle.net/10852/63480
Iterative Algorithms in Compressive Sensing
Høisæther, Kristoffer Ulvik
Compressive sensing is a modern field in applied mathematics which receives a lot of attention. In this thesis, we give some insight into the iterative algorithms used in compressive sensing. We study in particular the primal-dual algorithm proposed by Chambolle and Pock, and Nesterov's algorithm, NESTA. In general, the primal-dual algorithm is a more traditional algorithm than NESTA. Nesterov proved that for general convex functions, the primal-dual algorithm cannot achieve a better convergence rate than O(1/k), where k is the number of iterations, whereas Nesterov's algorithm achieves a convergence rate of O(1/(k^2)).
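As a toy illustration of the O(1/k) versus O(1/(k^2)) gap, one can compare plain gradient descent with Nesterov's accelerated method on a smooth convex quadratic (a generic example, not the Chambolle-Pock or NESTA implementations studied in the thesis):

```python
import numpy as np

def gd(grad, x0, L, iters):
    """Plain gradient descent with step 1/L: O(1/k) on smooth convex f."""
    x = x0.copy()
    for _ in range(iters):
        x = x - grad(x) / L
    return x

def nesterov(grad, x0, L, iters):
    """Nesterov's accelerated gradient method: O(1/k^2)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_next = y - grad(y) / L
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# f(x) = 0.5 x^T A x - b^T x, an ill-conditioned smooth convex problem
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b
L = 100.0                      # Lipschitz constant of grad (largest eigenvalue)
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = np.linalg.solve(A, b)

err_gd = f(gd(grad, np.zeros(2), L, 100)) - f(x_star)
err_nag = f(nesterov(grad, np.zeros(2), L, 100)) - f(x_star)
```

After the same number of iterations the accelerated method sits much closer to the optimum, consistent with the rate gap described above.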
2018-01-01
http://hdl.handle.net/10852/63475
Explicit Time Stepping Schemes for the Bidomain Model
Bjørland, Christian
The bidomain and monodomain models describing cardiac electrophysiology are computationally demanding to solve. Employing efficient numerical methods is therefore important for the practical use of these models. In this thesis we have explored the efficiency of numerical methods based on finite elements in space and explicit and semi-implicit finite differences in time. The explicit scheme which solves the bidomain equations uses a fixed number of Jacobi-iterations to solve the stationary, elliptic part of the model. The results show that an explicit scheme based on Jacobi iterations can give comparable and, in some cases, better computational efficiency than semi-implicit schemes based on operator splitting.
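A minimal sketch of using a fixed number of Jacobi sweeps per step, shown here for a 1D Poisson problem as a stand-in for the stationary elliptic part of the bidomain model (the real scheme uses finite elements in higher dimensions):

```python
import numpy as np

def jacobi_step(u, f, h):
    """One Jacobi sweep for -u'' = f on [0, 1] with u(0) = u(1) = 0,
    discretized by standard second-order central differences."""
    u_new = u.copy()
    u_new[1:-1] = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u_new

def solve_fixed_sweeps(f, n, sweeps):
    """Apply a fixed number of Jacobi sweeps, mirroring the explicit
    bidomain scheme's fixed iteration count per time step."""
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    for _ in range(sweeps):
        u = jacobi_step(u, f, h)
    return u
```

Each sweep touches only neighbouring grid values, which is what makes the fully explicit scheme cheap per step compared to a semi-implicit solve.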
2018-01-01
http://hdl.handle.net/10852/63458
A study of dead water resistance: Reynolds-averaged Navier-Stokes simulations of a barge moving in stratified waters
Killingstad, Peter Even
Computational fluid dynamics, with the software OpenFOAM, has been used to investigate the dead water phenomenon. The investigation was done by simulating a barge moving in stratified waters. A comparison of the turbulence models k-epsilon and k-omega SST was conducted, with the k-omega SST model performing better at estimating drag. Simulations of the barge moving in a stratified fluid are shown to generate internal gravity waves below and in the wake of the barge. The internal gravity waves cause the barge to experience an increase in drag for subcritical densimetric Froude numbers (Fr_h). Maximum drag is shown to appear in the region 0.6 ≤ Fr_h ≤ 0.7. The internal gravity waves restrict the passage area, causing an acceleration of the flow downstream of the barge, which results in a thinning of the boundary layer and a drop in pressure.
2018-01-01
http://hdl.handle.net/10852/63453
The dead water phenomenon. A computational fluid dynamics study
Jacobsen, Karl Åge
Applying the computational fluid dynamics software OpenFOAM, we study dead water resistance on a barge. Dead water resistance is the extra drag due to internal waves at the interface between salt water and fresh water. For Froude numbers below 1.5, we find an inverted U-shaped relation between the Froude number and the dead water resistance. Below and above this range the dead water resistance is small, with the peak located in the range Fr 0.6-0.7. The dead water drag grows with the ratio of draft to pycnocline depth; for a ratio equal to one, the drag increases by 21 percent due to the internal wave. The k-omega model does not give steady drag coefficients in our simulations, whereas the k-omega SST model does. The dead water drag depends to a large degree on the internal wave surface elevation below the stern of the barge. Pressure drag is the main driver behind the dead water drag; skin friction contributes positively only for the largest draft-to-pycnocline ratio, which equals one.
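The densimetric Froude number used in both dead-water studies above can be computed directly from the layer densities and the upper-layer depth. A small helper, assuming the standard definition Fr_h = U / sqrt(g' h) with reduced gravity g' (the numbers in the usage below are illustrative, not from the theses):

```python
import math

def densimetric_froude(U, rho_upper, rho_lower, h):
    """Fr_h = U / sqrt(g' h) with reduced gravity
    g' = g (rho_lower - rho_upper) / rho_lower.

    U: barge speed (m/s); rho_upper, rho_lower: layer densities
    (kg/m^3); h: upper (fresh) layer depth (m).  Peak dead water
    drag is reported above for Fr_h roughly between 0.6 and 0.7.
    """
    g = 9.81
    g_reduced = g * (rho_lower - rho_upper) / rho_lower
    return U / math.sqrt(g_reduced * h)
```

For example, a barge moving at 0.2 m/s over a 0.1 m fresh layer on sea water (1000 vs. 1025 kg/m^3) is supercritical, Fr_h ≈ 1.3, and would sit outside the peak-drag range.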
2018-01-01
http://hdl.handle.net/10852/63450
Compressed Sensing and the Quadratic Bottleneck Problem: A Combinatorics Approach
Sheehan, Kevin Patrick
Compressed sensing is the study of solving underdetermined systems of linear equations with unique sparse solutions. In addition to this, applications of compressed sensing require that the number of rows of the measurement matrix is minimized. In general, the entries of a measurement matrix can be complex. A combinatorial measurement matrix narrows this down to just zeros and ones. These measurement matrices may be obtained from other objects such as the incidence matrix of a combinatorial design or the bipartite adjacency matrix of a bipartite graph. In many applications, a measurement matrix is normally a randomly constructed matrix. This is because it has been shown that with high probability, certain classes of random measurement matrices have on the order of the optimal number of rows required for sparse reconstruction. Finding deterministically constructed classes of measurement matrices whose number of rows scale on the same order as classes of random measurement matrices has been an open problem for at least a decade. This problem is referred to as the quadratic bottleneck problem.
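A 0-1 measurement matrix of the kind described above can be written down directly as the bipartite adjacency matrix of a graph; a minimal sketch (the edge list in the usage below is illustrative, not a construction with proven recovery guarantees):

```python
import numpy as np

def bipartite_measurement_matrix(edges, n_left, n_right):
    """Bipartite adjacency matrix as a 0-1 measurement matrix:
    rows are measurement (left) vertices, columns are signal (right)
    vertices, and A[i, j] = 1 exactly when (i, j) is an edge."""
    A = np.zeros((n_left, n_right), dtype=int)
    for i, j in edges:
        A[i, j] = 1
    return A
```

The quadratic bottleneck problem then asks for deterministic graph (or design) families whose adjacency matrices need as few rows as the random constructions.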
2018-01-01
http://hdl.handle.net/10852/63389
Investigating the Interaction Between Morphology of the Anterior Bend and Aneurysm Initiation
Kjeldsberg, Henrik Aasen
2018-01-01
http://hdl.handle.net/10852/63369
Spectral investigation of annular flow
Piterskaya, Anna
This graduate thesis is devoted to the examination of spectral methods. The object of the research is to analyse the accuracy and precision of different numerical schemes for the Poisson and biharmonic equations, and to evaluate their capacity to produce reliable and consistent results when solving for the motion of a single-phase fluid in the region under consideration. That region is defined by annular geometry: in the two-dimensional case, an annulus with inner radius r0 and outer radius r1; in the three-dimensional case, the region between the inner and outer surfaces of two cylindrical tubes inserted one into the other, where the radius of the inner cylinder is r0 and the radius of the outer cylinder is r1. Of all the spectral methods, two basic methods are considered in detail in the thesis: the Galerkin method and the collocation method. As basis functions we have chosen the orthogonal Legendre and Chebyshev polynomials for the bounded non-periodic interval in the radial direction, and Fourier series for the periodic intervals in the angular and z coordinate directions. The Galerkin and collocation methods have been used to approximate the Poisson and biharmonic equations, and the corresponding numerical schemes have been implemented in Python. The work on the Poisson and biharmonic equations has helped to elaborate a new technique for discretising the Navier-Stokes equations in three-dimensional space. This is to some extent new material, because no algorithms based on spectral methods have previously been designed for problems such as the motion of a single-phase fluid in annular geometry.
The analysis of the numerical results shows that both methods achieve good spectral convergence. A detailed investigation has also been carried out to clarify the extent to which both methods produce consistent results in the presence of noise caused by different error sources, and the correlation between the condition number of the matrices and the number of quadrature points has been studied in detail. This analysis shows that when the collocation method is used, the matrix of the biharmonic equation is affected by round-off-error-dependent distortions. The research demonstrates that the Galerkin methods for both the Poisson and biharmonic equations in annular geometry are reliable and consistent. It therefore seems reasonable to conclude that the Galerkin methods hold significant potential for investigating more complex problems.
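The Fourier part of the discretisation above can be illustrated in one dimension: in a periodic direction, differentiation is diagonal in Fourier space, so the Poisson equation reduces to a division mode by mode. A minimal sketch of the idea only, not the thesis's annular Galerkin scheme:

```python
import numpy as np

def fourier_poisson(f_vals):
    """Solve u'' = f on [0, 2*pi) with periodic boundary conditions
    by a Fourier spectral method: u_hat = -f_hat / k^2 for each
    nonzero mode.  The k = 0 mode is undetermined (u is fixed only
    up to a constant), so it is set to zero, i.e. u has zero mean."""
    n = len(f_vals)
    k = np.fft.fftfreq(n, d=1.0 / n)   # integer wavenumbers 0, 1, ..., -1
    f_hat = np.fft.fft(f_vals)
    u_hat = np.zeros_like(f_hat)
    nonzero = k != 0
    u_hat[nonzero] = -f_hat[nonzero] / k[nonzero] ** 2
    return np.fft.ifft(u_hat).real
```

For smooth periodic data the error decays spectrally with the number of grid points; the non-periodic radial direction is where the Legendre or Chebyshev bases take over.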
2018-01-01
http://hdl.handle.net/10852/63309
A Pipeline for Extraction of Patient-Specific Geometries with Machine Learning
Florvaag, Per Magne
Modeling the blood flow in and around aneurysms with computational fluid dynamics (CFD) is important for better understanding why aneurysms form and rupture. CFD modeling requires an accurate representation of the patient-specific arteries for simulations to be reproducible and reflect reality. State-of-the-art methods use semi-manual tools to extract patient-specific geometries, which yields inconsistent results and a lot of tedious work. This limits the potential clinical impact of CFD-based aneurysm modeling. In this thesis, we develop an automated pipeline for extracting consistent patient-specific geometries. The pipeline consists of two parts: 1) image restoration based on dictionary learning, and 2) vessel extraction by multiscale segmentation techniques. We show that dictionary learning based methods are able to restore (denoise and inpaint) 3D computed tomography (CT) images, and that multiscale segmentation techniques can accurately extract both small and large arteries. Finally, we summarize the proposed pipeline and show its efficiency on a number of 3D CT images from the Aneurisk Project. The suggested pipeline is provided as a ready-to-use Python library.
2018-01-01
http://hdl.handle.net/10852/61922
Detecting valvular event times from echocardiograms using deep neural networks
Roald, Marie
The timing of cardiac events is essential for the analysis of certain components of myocardial function [36]. Finding an algorithm that detects these timings has therefore been the subject of several studies [36]. In this project, deep neural networks were used to detect the valvular event times from echocardiography sequences. Three classes of neural network architectures were tested: fully convolutional architectures [48], VGG-inspired [47] architectures, and recurrent neural networks. Temporal information was also incorporated by feeding the network the relative time passed since the last QRS peak. It was found that incorporating temporal information was necessary for detecting the valvular event times with acceptable accuracy. The model providing the highest performance metrics was a VGG-inspired architecture with both an RNN head and the relative time since the QRS peak. A version of this model that did not include the relative times was visualised using both guided backpropagation [48] and image occlusion [54], which demonstrated that the position and movement of the valves were important for correctly predicting valvular events. The best model achieved 93% test accuracy and correctly detected all the valvular events in 7 out of 11 test series, with a mean error of 1.03 frames. This is not yet satisfactory for clinical use, but it does indicate that deep neural networks applied to echocardiography are a promising approach for automatic valvular event time detection.
2018-01-01
http://hdl.handle.net/10852/61358
Computational Analysis of a Drag reducing Cone Grid
Utnes, Anders
CFD analysis of a cone grid applied to a flat plate, with mesh generation script.
2017-01-01
http://hdl.handle.net/10852/61162
B-splines in Machine Learning
Douzette, Andre Sevaldsen
In the recent decade, artificial intelligence and machine learning have become increasingly popular for solving complex real-world problems. In particular, problems that were believed to be very hard, or in some cases impossible, for computers have seen a surge in interest from both academia and industry. A recent example is the defeat of the world's best Go player by Google's AlphaGo, using deep neural networks. At the core of neural networks is a part of the neuron called the activation function. This function is of major significance to how the network operates, but it is often overlooked: one most often picks one of the commonly used non-adaptive functions. There has been research into using adaptive sigmoid or ReLU functions, but these have the drawback that adaptations caused by data from one local region affect the global domain. We therefore propose to use adaptive spline functions with free knots. Research into spline networks has been limited in sophistication, with interpolating cubic splines being the most studied. The implementation in this thesis uses splines with B-splines as a basis, and is therefore free to use any polynomial degree.
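A spline activation function of the kind proposed above is a linear combination of B-spline basis functions. A minimal sketch of evaluating such a basis by the Cox-de Boor recursion (the coefficients, which a spline network would learn along with the free knots, are passed in by hand here):

```python
def bspline_basis(i, p, t, x):
    """Evaluate the i-th B-spline basis function of degree p on the
    knot vector t at x, via the Cox-de Boor recursion."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + p] != t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
    if t[i + p + 1] != t[i + 1]:
        right = ((t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1])
                 * bspline_basis(i + 1, p - 1, t, x))
    return left + right

def spline_activation(coeffs, p, t, x):
    """A spline activation function: a linear combination of the
    B-spline basis; changing one coefficient only changes the
    function on the local support of that basis function."""
    return sum(c * bspline_basis(i, p, t, x) for i, c in enumerate(coeffs))
```

The local support of each basis function is exactly what avoids the drawback noted above for adaptive sigmoids and ReLUs, where a local adaptation affects the whole domain.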
2017-01-01