Automatic knitting machines are robust digital fabrication devices that enable rapid and reliable production of attractive, functional objects by combining stitches to produce unique physical properties. However, no existing design tools support optimization for desirable physical and aesthetic knitted properties. We present KnitGIST (Generative Instantiation Synthesis Toolkit for knitting), a program synthesis pipeline and library for generating hand- and machine-knitting patterns by intuitively mapping objectives to tactics for texture design. KnitGIST generates a machine-knittable program in a domain-specific programming language.
2020-10-20
Recent recommender systems have started to employ knowledge distillation, a model compression technique that distills knowledge from a cumbersome model (the teacher) to a compact model (the student), to reduce inference latency while maintaining performance. State-of-the-art methods have focused only on making the student model accurately imitate the predictions of the teacher model. They are limited in that prediction results reveal the teacher's knowledge only incompletely. In this paper, we propose a novel knowledge distillation framework for recommender systems, called DE-RRD, which enables the student model to learn from the latent knowledge encoded in the teacher model as well as from the teacher's predictions. Concretely, DE-RRD consists of two methods: 1) Distillation Experts (DE), which directly transfers the latent knowledge from the teacher model. DE exploits "experts" and a novel expert selection strategy to effectively distill the teacher's vast knowledge to a student with limited capacity. 2) Relaxed Ranking Distillation (RRD), which transfers the knowledge revealed in the teacher's predictions while taking into account the relaxed ranking orders among items. Our extensive experiments show that DE-RRD outperforms state-of-the-art competitors and achieves performance comparable to, or even better than, that of the teacher model with faster inference.
2020-10-19
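The expert idea in DE can be illustrated with a minimal numerical sketch. The dimensions, the linear experts, the soft gate-weighted mixture, and all variable names below are illustrative assumptions; the paper's actual expert architecture and selection strategy differ.

```python
import numpy as np

rng = np.random.default_rng(0)

D_S, D_T, N_EXPERTS = 8, 32, 4    # student dim, teacher dim, number of experts

# Experts: small maps from the student's embedding space to the teacher's.
experts = [rng.normal(size=(D_S, D_T)) * 0.1 for _ in range(N_EXPERTS)]
gate_W = rng.normal(size=(D_S, N_EXPERTS)) * 0.1   # toy selection network

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def de_loss(student_emb, teacher_emb):
    """Soft expert selection: each entity's student embedding is mapped into
    the teacher's space by a gate-weighted mixture of experts; the
    distillation loss is the reconstruction error against the teacher."""
    gate = softmax(student_emb @ gate_W)                         # (batch, n_experts)
    outs = np.stack([student_emb @ W for W in experts], axis=1)  # (batch, n_experts, d_t)
    recon = (gate[..., None] * outs).sum(axis=1)                 # (batch, d_t)
    return ((recon - teacher_emb) ** 2).mean()
```

Minimizing such a loss would train the student (and the experts) to reconstruct the teacher's latent representations, which is the kind of latent-knowledge transfer the abstract describes.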
Apache Flink is an open-source system for scalable processing of batch and streaming data. Flink does not natively support efficient processing of spatial data streams, which is a requirement of many applications dealing with spatial data. Besides Flink, other scalable spatial data processing platforms, including GeoSpark, Spatial Hadoop, etc., do not support streaming workloads and can only handle static/batch workloads. To fill this gap, we present GeoFlink, which extends Apache Flink to support spatial data types, indexes, and continuous queries over spatial data streams. To enable efficient processing of spatial continuous queries and effective data distribution across Flink cluster nodes, a grid-based index is introduced. GeoFlink currently supports spatial range, spatial kNN, and spatial join queries on the point data type. An experimental study on real spatial data streams shows that GeoFlink achieves significantly higher query throughput than ordinary Flink processing.
2020-10-19
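The core of a grid-based index is a uniform partitioning of the bounding box into cells, so that each point maps to a cell id (usable as a distribution key across cluster nodes) and a range query only has to inspect the cells it overlaps. The following is a minimal sketch of that idea, not GeoFlink's actual implementation; the class and method names are invented for illustration.

```python
class GridIndex:
    """Uniform grid over a bounding box; each point maps to a cell id that
    can double as a partitioning key across cluster nodes."""

    def __init__(self, min_x, min_y, max_x, max_y, n):
        self.min_x, self.min_y = min_x, min_y
        self.cell_w = (max_x - min_x) / n
        self.cell_h = (max_y - min_y) / n
        self.n = n

    def cell_of(self, x, y):
        """Cell coordinates of a point (points on the max edge clamp inward)."""
        cx = min(int((x - self.min_x) / self.cell_w), self.n - 1)
        cy = min(int((y - self.min_y) / self.cell_h), self.n - 1)
        return cx, cy

    def cells_for_range(self, qx1, qy1, qx2, qy2):
        """Candidate cells overlapping a rectangular range query; all other
        cells (and their points) are pruned without inspection."""
        cx1, cy1 = self.cell_of(qx1, qy1)
        cx2, cy2 = self.cell_of(qx2, qy2)
        return [(cx, cy) for cx in range(cx1, cx2 + 1)
                         for cy in range(cy1, cy2 + 1)]
```

In a streaming setting, points falling in cells fully inside the query rectangle can be emitted without a point-level check, and only boundary cells need exact filtering.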
We study the problem of merging decision trees: given k decision trees $T_1, T_2, \ldots, T_k$, we merge these trees into one super tree T of (often) much smaller size. The resultant super tree T, which is an integration of the k decision trees with each leaf holding a majority label, can also be considered a (lossless) compression of a random forest. For any testing instance, it is guaranteed that the tree T gives the same prediction as the random forest consisting of $T_1, T_2, \ldots, T_k$, but it saves the computational effort needed to traverse multiple trees. The proposed method is suitable for classification problems with time constraints, for example, online classification tasks that must predict a label for a new instance before the next instance arrives. Experiments on five datasets confirm that the super tree T runs significantly faster than the random forest with k trees. The merging procedure also saves the space needed to store those k trees, and it makes the forest model more interpretable, since one tree is naturally easier to interpret than k trees.
2020-10-19
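The guarantee that one tree reproduces the forest's vote can be seen in a toy "product" construction: merging two trees yields a tree whose leaves carry the votes of both, so a single traversal answers for the whole forest. This sketch is an illustrative baseline under invented names, not the paper's algorithm; in particular it omits the pruning of infeasible branches that gives the paper its size reduction.

```python
from collections import Counter

# A tree is either a label (leaf) or a 4-tuple (feature, threshold, left, right);
# merged leaves are lists of labels (one vote per original tree).

def as_votes(leaf):
    return leaf if isinstance(leaf, list) else [leaf]

def predict(tree, x):
    while isinstance(tree, tuple):
        feat, thr, left, right = tree
        tree = left if x[feat] <= thr else right
    return tree

def merge(t1, t2):
    """Product of two trees: every path of the result follows a path in each
    input tree, so its leaf collects both trees' votes."""
    if not isinstance(t1, tuple) and not isinstance(t2, tuple):
        return as_votes(t1) + as_votes(t2)
    if isinstance(t1, tuple):
        feat, thr, left, right = t1
        return (feat, thr, merge(left, t2), merge(right, t2))
    feat, thr, left, right = t2
    return (feat, thr, merge(t1, left), merge(t1, right))

def majority(tree, x):
    """One traversal of the merged tree yields the forest's majority vote."""
    return Counter(predict(tree, x)).most_common(1)[0][0]
```

Folding `merge` over k trees gives one super tree; the paper's contribution is doing this while keeping the result small.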
Individuals with Autism Spectrum Disorder (ASD) often face difficulties creating and maintaining social connections with others, which has been shown to negatively affect their well-being. Some researchers have investigated whether social media use can lead to social benefits, but with mixed results. To better understand how social media use can be beneficial and what challenges it poses, we conducted an interview study with eight adults on the Autism Spectrum. We report on the perceived benefits and real challenges participants faced when trying to engage with others through social media. Often the benefits users hope for are overshadowed by negative ramifications and safety risks that accompany their social media use. We conclude with recommendations for designing social media for neurodiverse users.
2020-10-17
Live streaming is an increasingly popular communication medium that allows real-time interaction between a broadcaster and an audience of any size. Using archived YouTube live video transcripts and associated live chat messages, we find evidence for emotional contagion in live streams: sentiment in live video oral transcripts and viewers' text chat is associated with the sentiment in subsequent viewers' comments. This relationship is stronger between viewers' chat messages and the subsequent chat than between the oral messages in the video and the subsequent chat. However, in some types of live streams, negative sentiment in the live video is followed by less negative chat. We conclude with a discussion of future research and potential uses of the dataset.
2020-10-17
It is essential that users understand how algorithmic decisions are made, as we increasingly delegate important decisions to intelligent systems. Prior work has often taken a techno-centric approach, focusing on new computational techniques to support transparency. In contrast, this article employs empirical methods to better understand user reactions to transparent systems and to motivate user-centric designs for such systems. We assess user reactions to transparency feedback in four studies of an emotional analytics system. In Study 1, users anticipated that a transparent system would perform better but unexpectedly retracted this evaluation after experience with the system. Study 2 offers an explanation for this paradox by showing that the benefits of transparency are context dependent. On the one hand, transparency can help users form a model of the underlying algorithm's operation. On the other hand, positive accuracy perceptions may be undermined when transparency reveals algorithmic errors. Study 3 explored real-time reactions to transparency. Results confirmed Study 2, showing that users are both more likely to consult transparency information and to experience greater system insights when formulating a model of system operation. Study 4 used qualitative methods to explore real-time user reactions to motivate transparency design principles. Results again suggest that users may benefit from initially simplified feedback that hides potential system errors and assists users in building working heuristics about system operation. We use these findings to motivate new progressive disclosure principles for transparency in intelligent systems and discuss theoretical implications.
2020-10-16
The study of morphogenesis has increasingly entailed the use of computer simulations to predict intricate behaviours of such systems, which has led to the development of tools for computational biologists to build their own simulations. However, uptake of these tools by experimental biologists has been slow, and concerns remain over the assumptions underlying such tools, which are not readily explicable to a domain expert without prior programming knowledge. To demonstrate how these concerns might be addressed, we propose the creation of a domain-specific language (DSL) for one such simulation, the MemAgent-Spring model (MSM). By designing this DSL around regimented biological concepts identified by observing discussions between experimentalists and modellers, we hope to better understand how the usability and reproducibility of the MSM might be improved, therefore potentially increasing its usage by experimentalists.
2020-10-16
The increasing performance demands and certification needs of complex cyber-physical systems (CPS) raise the complexity of the engineering process, not only within the development phase but also in the Verification and Validation (V&V) phase. A proven technique to handle the complexity of CPSs is Model-Based Design (MBD). Nevertheless, the verification and validation of complex CPSs is still an exhaustive process, and the usability of the models to front-load V&V activities heavily depends on the knowledge of the models and the correctness of the conducted virtual experiments. In this paper, we explore how the effort (and cost) of the V&V phase of the engineering process of complex CPSs can be reduced by enhancing the knowledge about the system components and explicitly capturing it within their corresponding validity frame. This effort reduction originates from exploiting the captured system knowledge to generate efficient V&V processes and by automating activities at different model life stages, such as the setup and execution of boundary-value or fault-injection tests. This will be discussed in the context of a complex CPS: a safety-critical adaptive cruise control system.
2020-10-16
Humans often switch between multiple levels of abstraction when reasoning about salient properties of complex systems. These changes in perspective may be leveraged at runtime to improve both performance and explainability, while still producing identical answers to questions about the properties of interest. This technique, which switches between multiple abstractions based on changing conditions in the modelled system, is also known as adaptive abstraction.
The Modelica language represents systems as acausal continuous equations, which makes it appropriate for the modelling of physical systems. However, adaptive abstraction requires dynamic-structure modelling. This raises many technical challenges in Modelica, since it has poor support for modifying connections during simulation. Its equation-based nature means that all equations need to be well-formed at all times, which may not hold when switching between levels of abstraction. The initialization of models upon switching must also be carefully managed, as information is lost, or must be created, when switching abstractions.
One way to allow adaptive abstraction is to represent the system as a multi-mode hybrid Modelica model, a mode being an abstraction that can be switched to based on relevant criteria. Another way is to employ a co-simulation approach, where modes are exported as "black boxes" and orchestrated by a central algorithm that implements adaptivity techniques to dynamically replace components when a switching condition occurs.
This talk will discuss the benefits of adaptive abstraction using Modelica, and the conceptual and technical challenges towards its implementation. As a stand-in for a complex cyber-physical system, an electrical transmission line case study is proposed, where attenuation is studied across two abstractions of varying fidelity depending on the signal. Our initial results, as well as our explorations towards employing Modelica models in a co-simulation context using the DEVS formalism, are discussed. A Modelica-only solution allows complexity to be tackled via decomposition, but does not improve performance, as all modes are represented as a single set of equations. The co-simulation approach might offer better performance, but complicates the workflow.
2020-10-16
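The co-simulation orchestration described above can be sketched abstractly: each abstraction is a black box exposing step and state-access operations, and a central algorithm steps the active mode, swapping it (and transferring state) when a switching condition fires. Everything below is an invented toy stand-in; the mode dynamics, names, and switching rule are illustrative, not the case study's actual models or criteria.

```python
class Mode:
    """Black-box stand-in for an exported model of one abstraction level."""

    def __init__(self, name, gain):
        self.name, self.gain, self.signal = name, gain, 0.0

    def step(self, u, dt):
        # Toy first-order dynamics standing in for a transmission-line model.
        self.signal += dt * self.gain * (u - self.signal)
        return self.signal

    def get_state(self):
        return self.signal

    def set_state(self, s):
        self.signal = s

def orchestrate(inputs, dt, threshold=0.5):
    """Central algorithm: step the active mode, switch abstraction (with
    state transfer) whenever the switching condition on the input fires."""
    lofi = Mode("low-fidelity", 1.0)
    hifi = Mode("high-fidelity", 5.0)
    active = lofi
    trace = []
    for u in inputs:
        # Illustrative switching condition: large signals need high fidelity.
        wanted = hifi if abs(u) > threshold else lofi
        if wanted is not active:
            wanted.set_state(active.get_state())  # initialize the new mode
            active = wanted
        trace.append((active.name, active.step(u, dt)))
    return trace
```

The explicit `set_state`/`get_state` handoff is where the initialization challenge noted above surfaces: in general the two abstractions do not share a state vector, and the mapping between them must be defined per switch.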