G. Lan
17 records found
Optical flow estimation with event cameras encompasses two primary algorithm classes: model-based and learning-based methods. Model-based approaches do not require any training data, while learning-based approaches utilize datasets of events to train neural networks. To effective
...
Computer vision tasks have been shown to benefit greatly both from developments in deep learning networks and from the emergence of event cameras. Deep networks can require a large amount of training data, which is not readily available for event cameras, specifically for optical flow est
...
Event cameras are bio-inspired sensors with high dynamic range, high temporal resolution, and low power consumption. These features enable precise motion detection even in challenging lighting conditions and fast-changing scenes, rendering them well-suited for optical flow estima
...
The aim of this paper is to fill this gap in knowledge and to experiment with using as little as only the heart rate of subjects to successfully authorise them in a hypothetical system. The focus will be on the Gaussian Mixture Model and the One-Class Support Vector
...
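The heart-rate authorisation idea sketched in the abstract above can be illustrated with a minimal stand-in: a single-component Gaussian profile of an enrolled subject's heart rate, with new readings accepted only if they fall within a few standard deviations. This is an illustrative simplification, not the thesis's actual GMM or One-Class SVM pipeline; all names and thresholds here are assumptions.

```python
import statistics

def fit_gaussian(hr_samples):
    # Fit the mean and standard deviation of the enrolled
    # subject's heart-rate readings (a one-component "GMM").
    mu = statistics.fmean(hr_samples)
    sigma = statistics.stdev(hr_samples)
    return mu, sigma

def authorize(hr, mu, sigma, k=3.0):
    # Accept a new reading only if it lies within k standard
    # deviations of the enrolled profile; k is a chosen threshold.
    return abs(hr - mu) <= k * sigma

enrolled = [62, 64, 61, 63, 65, 62, 63]  # hypothetical resting heart rates
mu, sigma = fit_gaussian(enrolled)
ok = authorize(63, mu, sigma)       # reading close to the profile
rejected = authorize(110, mu, sigma)  # reading far outside the profile
```

A real system would model multi-modal heart-rate distributions (rest vs. activity), which is where a full Gaussian Mixture Model or One-Class SVM earns its keep.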
Performance of outlier detection on smartwatch data in single and multiple person environments
An analysis of the performance of different outlier detection methods on consumer-grade wearable data in environments with single and multiple subjects
Outlier detection is an essential part of modern systems. It is used to detect anomalies in behaviour or performance of systems or subjects, such as fall detection in smartwatches or voltage irregularity detection in batteries. This provides early indications of something of pote
...
Heart rate data and other data collected by consumer-grade wearable devices can reveal useful information about the user. For example, it can be used by machine learning algorithms such as Deep Neural Networks (DNNs) to learn patterns about cardiovascular disease and fitne
...
Person identification using heart rate and activity from consumer-grade wearables
How do different types of cardiac diagnosis affect the accuracy of Deep Neural Networks to identify individuals by their heart rate?
Advancements in the precision and accuracy of consumer-grade wearables, such as the Fitbit, have enabled the identification, and therefore authentication, of individuals based on the heart-rate signals recorded by these wrist-worn devices. With this type of authentication, a passw
...
In recent years, with the rapid expansion of IoT (Internet of Things) devices, a growing number of research and commercial projects have focused on various application areas of IoT. Signify, as a leading player in the smart home industry, has been deeply involved in this field for many
...
BadNets is a type of backdoor attack that manipulates the behavior of Convolutional Neural Networks (CNNs). The training is modified such that when certain triggers appear in the inputs, the CNN behaves accordingly. In this paper, we apply this type of backdoor attac
...
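The trigger mechanism described in the BadNets abstract above can be sketched in a few lines: stamp a fixed pixel pattern onto a fraction of the training images and relabel those samples to the attacker's target class, so the trained CNN associates the trigger with that class. This is a minimal illustration under assumed data shapes (images as nested lists), not the thesis's actual attack code.

```python
import copy
import random

# Pixel coordinates of a small, fixed trigger patch (an assumption).
TRIGGER = [(0, 0), (0, 1), (1, 0)]

def stamp_trigger(image):
    # Overwrite the trigger pixels with a fixed bright value,
    # leaving the original image untouched.
    img = copy.deepcopy(image)
    for r, c in TRIGGER:
        img[r][c] = 255
    return img

def poison(dataset, target_label, rate=0.1, seed=0):
    # BadNets-style poisoning: stamp the trigger on a fraction of
    # (image, label) pairs and relabel them to the target class.
    rng = random.Random(seed)
    out = []
    for image, label in dataset:
        if rng.random() < rate:
            out.append((stamp_trigger(image), target_label))
        else:
            out.append((image, label))
    return out
```

Training a CNN on the poisoned set then behaves normally on clean inputs but predicts the target class whenever the trigger patch is present.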
Recent years have seen an increasing interest in stablecoins from major corporate and governmental parties. The European Central Bank is investigating the possibility of introducing its own Central Bank Digital Currency. The desired features of such a currency are under discussio
...
Model extraction attacks are attacks which generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a preexisting dataset, but doing so requires a very high number of queries to be sent to the victim model. This is often in
...
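The query cost that the abstract above highlights can be made concrete with a toy extraction: against a one-dimensional threshold classifier that returns only hard labels, an attacker can recover the decision boundary by binary search, and the query counter shows how precision trades off against the number of queries. This is a deliberately simplified stand-in for neural-network extraction; the victim and its secret threshold are invented for illustration.

```python
def victim(x, secret_threshold=0.37):
    # Black-box victim: the attacker sees only the hard label,
    # never the secret threshold.
    return 1 if x >= secret_threshold else 0

def extract_threshold(query, lo=0.0, hi=1.0, tol=1e-6):
    # Substitute-model extraction for a 1-D threshold classifier:
    # binary-search the decision boundary using label queries only.
    queries = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if query(mid) == 1:
            hi = mid
        else:
            lo = mid
        queries += 1
    return (lo + hi) / 2, queries

boundary, n_queries = extract_threshold(victim)
```

Even this trivial one-parameter victim needs on the order of twenty queries for six decimal places; high-dimensional networks multiply that cost enormously, which is why query-efficient extraction is an active research problem.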
Black-box Adversarial Attacks using Substitute models
Effects of Data Distributions on Sample Transferability
Machine Learning (ML) models are vulnerable to adversarial samples — human-imperceptible changes to regular input that elicit incorrect output from a given model. Many adversarial attacks assume an attacker has access to the underlying model or to the data used to train the m
...
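The transferability question in the abstract above can be illustrated with two simple linear classifiers: a perturbation crafted against an attacker's substitute model (here, an FGSM-style signed-gradient step) can also flip the prediction of a similar but unseen victim model. The weights and step size below are assumptions chosen for illustration, not results from the thesis.

```python
def linear_classify(w, b, x):
    # Predict 1 if w·x + b >= 0, else 0.
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score >= 0 else 0

def fgsm(w, x, eps):
    # FGSM-style perturbation against a linear model: step each
    # feature opposite the sign of its weight to lower the score.
    return [xi - eps * (1 if wi >= 0 else -1) for wi, xi in zip(w, x)]

# Substitute model known to the attacker; victim is similar but unseen.
w_sub, b_sub = [1.0, 1.0], -1.0
w_vic, b_vic = [0.9, 1.1], -1.0

x = [0.6, 0.6]                   # classified as 1 by both models
x_adv = fgsm(w_sub, x, eps=0.2)  # crafted on the substitute only
```

Because the two decision boundaries are close, the sample crafted on the substitute transfers and is misclassified by the victim as well — the phenomenon whose dependence on data distributions the work above investigates.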
Adversarial training and its variants have become the standard defense against adversarial attacks - perturbed inputs designed to fool the model. Boosting techniques such as AdaBoost have been successful for binary classification problems; however, there is limited research in th
...
A machine learning classifier can be tricked using adversarial attacks: attacks that alter images slightly to make the target model misclassify them. To create adversarial attacks on black-box classifiers, a substitute model can be created using model stealing. The resea
...
In recent years, there has been a great deal of research on optimising the generation of adversarial examples for Deep Neural Networks (DNNs) in a black-box environment. The use of gradient-based techniques to obtain the adversarial images in a minimal amount of input-output cor
...
Natural Language Interfaces for Databases (NLIDBs) offer a way for users to reason about data. They do not require the user to know the data structure or its relations, or to be familiar with a query language like SQL; only the use of natural language is required. This thesis focuses
...
Recent works explain the DNN models that perform image classification tasks following the "attribution, human-in-the-loop, extraction" workflow. However, little work has looked into such an approach for explaining DNN models for language or multimodal tasks. To address this gap,
...