We live in an ever more connected world, where everyday environments host a proliferation of devices that continuously produce unbounded data flows. These flows must be processed "on the fly" to detect operational exceptions, deliver real-time alerts, and trigger automated actions. This paradigm extends to a wide spectrum of applications with high socio-economic impact, such as systems for healthcare, emergency management, surveillance, and intelligent transportation.
The data streaming domain belongs to the Big Data ecosystem. High-frequency data streams with time-varying characteristics are among the most challenging aspects in the design of streaming applications and frameworks. This is especially critical when strict performance requirements (e.g., on throughput and latency) must be met despite unexpected workload variability or the dynamism of the execution environment.
High-performance solutions targeting today's commodity parallel hardware are a must to enable efficient data stream processing. This comprises run-time supports targeting multicores, GPU and FPGA co-processors, and large-scale distributed-memory systems such as clusters, Clouds, and, more recently, Fog infrastructures. However, such solutions need autonomic logic to adapt frameworks and applications to changing execution conditions and workloads. Examples are mechanisms and strategies to adapt queries, operator placement policies, intra-operator parallelism degree, scheduling strategies, load shedding rate, and so forth.
Topics of interest include, but are not limited to, the following:
- Parallel models for streaming applications
- Stream processing in Cloud and Fog computing environments
- Parallel continuous queries
- Sliding-window queries
- High-level parallel patterns
- Autonomic solutions based on Control Theory and Artificial Intelligence methods
- Strategies for operator and query placement
- Stream processing on heterogeneous and reconfigurable hardware
- Out-of-order data streams
- Burstiness and workload variations
- Stream scheduling strategies and load balancing
- Adaptive load shedding
- Integration of elasticity supports in existing frameworks
- Applications and use cases in various domains including Smart Cities, Internet of Things, Finance, Social Media, and Healthcare
Submissions in PDF format should be between 10 and 12 pages in the Springer LNCS style, which can be downloaded from the Springer Web site. The 12-page limit is a hard limit: it includes everything (text, figures, references) and will be strictly enforced by the submission system. Complete LaTeX sources must be provided for accepted papers. All submitted research papers will be peer-reviewed. Only contributions that are not submitted elsewhere or currently under review will be considered. Accepted papers will be included in the workshop proceedings, published by Springer in the ARCoSS/LNCS series. Authors of accepted papers will have to sign a Springer copyright form.
Papers must be submitted through EasyChair using the following link:
The best papers presented at the workshop will be invited to contribute to a special issue of a high-quality, peer-reviewed, indexed journal. Details of the special issue will be published on this page soon.