In recent years, research in machine intelligence has gained momentum, with neural network models making significant contributions in fields such as image classification and language understanding. Recurrent neural networks (RNNs) are often the preferred approach for tasks such as language understanding and time-series analysis. A known weakness, however, is their difficulty in capturing long-term temporal dependencies, which has given rise to alternative RNN architectures. Long short-term memory (LSTM) networks and gated recurrent units (GRUs) address this problem, but at the expense of computational effort. As a result, convolutional neural networks (CNNs) have been explored for sequence modelling in recent years and have been shown to outperform RNNs on a range of tasks. Only a few comparative studies examine this trade-off, however, and most focus primarily on language tasks; such studies are far scarcer in the time-series classification domain, where traditional methods are still widely used. To address this shortcoming and better understand the behaviour of CNNs and RNNs in time-series classification, this thesis evaluates two shallow networks, a CNN and an LSTM. We extend the few existing comparisons through an experimental approach and provide a baseline comparison of the two architectures for the time-series classification domain. To do so, we created an easily extensible system for running experiments and evaluated our models on three different datasets using cross-validation: classifying depressed patients from motor activity, predicting the energy demand of electric vehicles (EVs), and classifying the readiness of football players. The system evaluates the CNN and the LSTM separately for each dataset and generalises to other neural network models, making it suitable for similar comparative studies. We show that a simple CNN matches the performance of an LSTM while being faster to train.
For two of our use cases, the CNN is more than 30 times faster in wall-clock training time, though we observe a trade-off between wall-clock time and iterations, as the CNN requires more training iterations. We conclude that for time-series classification, CNNs should be preferred over LSTMs because of their comparable performance and faster training.