万方知识发现服务平台 (Wanfang Knowledge Discovery Service Platform)
Found 1,923 results
Abstract: A mobile application is an application system delivered via mobile phone; it is essentially a network-enabled conveyance of skills and knowledge. Owing to the advanced technology of smartphones, such applications have become important systems that enable a large number of people to access information efficiently. The aim of this research is to design and implement a mobile application able to disseminate students' exam results. We developed this application with the Java programming language, the Phased model as the software development methodology, and Android technology [1]. This research used the following data-collection methods: documentation, interview, and observation techniques. The researchers concluded that the system was successfully implemented using the Phased Model methodology....
Abstract: The APEXeditor, an Excel-based tool, has been developed using Visual Basic for Applications (VBA) to provide a graphical user interface (GUI) to the Agricultural Policy Environmental eXtender (APEX) model. APEX, in its native form, requires users to edit text files to modify inputs; a GUI can therefore aid users in modifying these files and reduce errors. Microsoft Excel is a popular spreadsheet program with the largest user base among scientists and researchers, providing a relatively common platform on which to stage the tool. The APEXeditor requires minimal additional learning for those who already have a basic knowledge of Excel. The user can load APEX input files into the spreadsheet, and the GUI offers meta-information and provides functions to edit, write, and run the APEX model. Ultimately, the APEXeditor substitutes for existing GUI programs such as WinAPEX or ArcAPEX that require installation or additional licensing. A series of scripts was developed as a back-end engine that automates data formatting and editing of linked APEX input ASCII files, including database libraries. The simple architecture of the tool helps users maintain data quality and allows error-free editing of APEX model inputs to characterize the system under study. The tool is suitable for a wide range of applications and has been used successfully to create APEX model runs in numerous studies....
Abstract: Because of increasing attention to environmental issues, especially air pollution, predicting whether a day is polluted is important to people's health. To address this problem, this research classifies ground ozone level using big data and machine learning models, where a polluted ozone day has class 1 and a non-ozone day has class 0. The dataset used in this research was derived from the UCI website and contains various environmental factors in the Houston, Galveston, and Brazoria areas that could affect the occurrence of ozone pollution [1]. The dataset is first imputed to fill missing values, then standardized to ensure every feature has the same weight, and finally split into a training set and a testing set. Five different machine learning models are then used to predict ground ozone level, and their final accuracy scores are compared. Among Logistic Regression, Decision Tree, Random Forest, AdaBoost, and Support Vector Machine (SVM), the last has the highest test score, 0.949. This research applies relatively simple forecasting methods and calculates first accuracy scores for predicting ground ozone level; it can thus serve as a reference for environmentalists. Moreover, the direct comparison among five different models gives the machine learning field insight for determining the most accurate model. In the future, a neural network could also be used to predict air pollution, and its test scores compared with those of the previous five methods....
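For concreteness, here is a minimal scikit-learn sketch of the pipeline this abstract describes (impute, standardize, split, compare five classifiers); the synthetic data, feature count, and hyperparameters are placeholders, not the paper's.

```python
# A minimal sketch of the described pipeline; X and y are random stand-ins
# for the UCI ozone data, and all hyperparameters are defaults.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 72))             # placeholder environmental features
y = rng.integers(0, 2, size=200)           # placeholder 0/1 ozone-day labels
X[rng.random(X.shape) < 0.05] = np.nan     # simulate missing readings

X = SimpleImputer(strategy="mean").fit_transform(X)   # fill missing values
X = StandardScaler().fit_transform(X)                 # equal feature weight
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LogReg": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(),
    "AdaBoost": AdaBoostClassifier(),
    "SVM": SVC(),
}
for name, model in models.items():
    score = model.fit(X_tr, y_tr).score(X_te, y_te)   # test-set accuracy
    print(f"{name}: {score:.3f}")
```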
Abstract: This paper gives an overview of the requirement analysis process for software development. Here I discuss key parts of requirement analysis: gathering relevant materials, functional analysis and allocation, how to improve and ensure a quality process, document development, and other activities related to the requirement analysis process. The scope of this study is not a generalized approach; rather, it proceeds through specific cases, such as the Dutch flower case. It describes and highlights the main areas of the requirement process in practice. I hope that readers will find this paper useful in guiding them toward the knowledge and resources they need....
Abstract: We present a unique approach to communication deadlock analysis for the actor model that yields an under-approximated analysis result. Our analysis detects narrowly defined communication deadlocks by finding a cyclic dependency in a novel dependency graph called the slave dependency graph, which is based on a new relationship between actors, slave dependency, that we define. We then implement this theory in Soot, an analysis tool for Java, and use it to analyze actor-based Java programs written with Akka, a Java library for actor-based programming. We argue that our analysis can detect a specific kind of communication deadlock with precise results, but it has many limitations....
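The paper's contribution is the construction of the slave dependency graph itself, which is not reproduced here; as an illustration of the final step only, once such a graph exists, reporting a potential deadlock reduces to standard cycle detection, as in this sketch (the dict-of-lists graph encoding is an assumption).

```python
# Detect a cyclic dependency in an actor dependency graph via DFS coloring.
def has_cycle(graph):
    """graph: dict mapping each actor to the actors it (slave-)depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY                     # node is on the current DFS path
        for succ in graph.get(node, ()):
            if color.get(succ, WHITE) == GRAY:
                return True                    # back edge => cyclic dependency
            if color.get(succ, WHITE) == WHITE and visit(succ):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# Example: actor A waits on B and B waits on A -> reported as a deadlock.
print(has_cycle({"A": ["B"], "B": ["A"], "C": []}))   # True
```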
Abstract: The performance and reliability of converting natural language into Structured Query Language (SQL) can be problematic when handling the nuances that are prevalent in natural language. Relational databases are not designed to understand linguistic nuance, so the question of why we must handle nuance at all has to be asked. This paper examines an alternative solution for converting a natural language query into SQL capable of being used to search a relational database. The process uses part of speech, a natural language concept, to identify words that can designate database tables and table columns. OpenNLP-based grammar files, as well as additional configuration files, assist in the translation from natural language to query language. Having identified which tables and which columns contain the pertinent data, the next step is to create the SQL statement....
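A rough sketch of the part-of-speech idea follows. The paper uses OpenNLP (a Java library); this sketch substitutes NLTK, and the toy schema and noun-matching rule are invented for illustration (requires nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")).

```python
# Map nouns in a question to table/column names and emit a SQL statement.
import nltk

SCHEMA = {"students": ["name", "grade", "city"]}   # hypothetical schema

def nl_to_sql(question):
    tagged = nltk.pos_tag(nltk.word_tokenize(question))
    # Keep nouns: the candidate table/column identifiers.
    nouns = [w.lower() for w, tag in tagged if tag.startswith("NN")]
    stems = [n.rstrip("s") for n in nouns]
    table = next((t for t in SCHEMA if t.rstrip("s") in stems), None)
    cols = [c for c in SCHEMA.get(table, []) if c in nouns] or ["*"]
    return f"SELECT {', '.join(cols)} FROM {table};"

print(nl_to_sql("Show the name and grade of every student"))
# -> SELECT name, grade FROM students;
```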
Abstract: This paper presents an experiment that uses OpenBCI to collect data for two hand gestures and decodes the signal to distinguish them. The signal was acquired with three electrodes on the subject's forearm and transferred over one channel. After applying a Butterworth bandpass filter, we chose a novel way to detect the gesture action segment: instead of using a moving-average algorithm, which is based on an energy calculation, we developed an algorithm based on the Hilbert transform to find a dynamic threshold and identify the action segment. Four features were extracted from each activity section, generating feature vectors for classification. During classification, we compared K-nearest neighbors (KNN) and the support vector machine (SVM) on a relatively small number of samples. Most experiments rely on a large quantity of data to pursue a highly fitted model, but there are circumstances in which enough training data cannot be obtained, which makes exploring the best classification method for small-sample data imperative. Though KNN is known for its simplicity and practicality, it is a relatively time-consuming method. SVM, on the other hand, performs better in both time requirement and recognition accuracy, owing to its different risk minimization principle. Experimental results show an average recognition rate for the SVM algorithm that is 1.25% higher than for KNN, while SVM is 2.031 s faster than KNN....
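A condensed SciPy sketch of the signal chain described above is shown below; the sampling rate, cutoff frequencies, synthetic burst, and threshold rule are all placeholders, not the paper's values.

```python
# Bandpass-filter a (synthetic) EMG trace, take the Hilbert envelope, and
# mark samples above a dynamic threshold as the candidate action segment.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
emg = np.random.randn(t.size) * 0.05
emg[500:750] += np.sin(2 * np.pi * 60 * t[500:750])   # synthetic gesture burst

b, a = butter(4, [20, 120], btype="bandpass", fs=fs)  # Butterworth bandpass
filtered = filtfilt(b, a, emg)

envelope = np.abs(hilbert(filtered))          # Hilbert-transform envelope
threshold = envelope.mean() + 2 * envelope.std()      # assumed threshold rule
active = envelope > threshold                 # candidate action-segment mask
print("active samples:", active.sum())
```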
Abstract: We have for some time been studying a method called the Enhanced Rollback Migration Protocol, which can potentially compress the period of compensations in a long-lived transaction. In general, a compensation transaction can recover an irregular status of a long-lived transaction to the original status without holding unnecessary resources, by tentatively loosening its consistency. However, it has also been pointed out that maintaining isolation between a pair of transactions executed in parallel is difficult; in particular, this could be more prominent under modern scalable cloud environments. Thus, concurrency control at the service level has been proposed. However, there remains a risk that more computer resources will be consumed than necessary, and that processing will stagnate needlessly, if concurrency control is applied naively without careful consideration. Therefore, we need a functionality that can optimize the processing of a long-lived transaction by selecting the more suitable method between concurrency control and compensation transactions. In this paper, we propose a method in which optimistic concurrency control is applied to long-lived transactions, and a pair of verification phases is carried out. At the beginning, from a safe point, a verification attempt is made; if difficulty maintaining isolation for a long-lived transaction executed under a competitive situation is anticipated, concurrency control at the service level is applied, and otherwise the long-lived transaction is executed without any concurrency control. At the next reachable safe point, another verification attempt is performed; if a serialization failure is detected, a set of compensation transactions is invoked to recover the original long-lived transaction by returning to the first safe point. We evaluated this approach using numerical simulations and confirmed its basic features. The approach can optimize and enhance the performance of a long-lived transaction, and we regard it as applicable even to modern scalable cloud environments....
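As a reading aid, here is a heavily simplified control-flow sketch of the two-phase scheme the abstract describes; every function here is a stub standing in for the paper's actual verification, concurrency-control, and compensation logic.

```python
# Sketch: verify at a safe point, run optimistically or with service-level
# concurrency control, re-verify, and compensate back on serialization failure.
def take_safe_point():
    return {"state": "checkpoint"}       # stand-in for captured state

def run_long_lived_transaction(steps, compensations, contention_expected,
                               serialized_ok, run_with_cc):
    safe_point = take_safe_point()
    if contention_expected():            # 1st verification at the safe point
        run_with_cc(steps)               # apply service-level concurrency control
        return
    for step in steps:                   # optimistic path: no concurrency control
        step()
    if not serialized_ok(safe_point):    # 2nd verification at next safe point
        for comp in reversed(compensations):
            comp()                       # compensate back to the first safe point

run_long_lived_transaction(
    steps=[lambda: print("step 1"), lambda: print("step 2")],
    compensations=[lambda: print("undo 1"), lambda: print("undo 2")],
    contention_expected=lambda: False,
    serialized_ok=lambda sp: False,      # force the compensation path in this demo
    run_with_cc=lambda steps: None,
)
```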
Abstract: Particle accelerators play an important role in a wide range of scientific discoveries and industrial applications. Self-consistent multi-particle simulation based on the particle-in-cell (PIC) method has been used to study charged-particle beam dynamics inside these accelerators. However, PIC simulation is time-consuming and requires modern parallel computers for high-resolution applications. In this paper, we implemented a parallel beam-dynamics PIC code on multi-node hybrid-architecture computers with multiple Graphics Processing Units (GPUs). We used two methods to parallelize the PIC code across multiple GPUs and observed that the replication method is the better choice for moderate problem sizes on current computer hardware, while the domain-decomposition method may be the better choice for large problem sizes and more advanced hardware that allows direct communication among multiple GPUs. Using the multi-node hybrid architectures at the Oak Ridge Leadership Computing Facility (OLCF), the optimized GPU PIC code achieves reasonable parallel performance and scales up to 64 GPUs with 16 million particles....
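The replication strategy mentioned above can be shown in miniature: every rank holds a full copy of the grid, deposits only its own particles, then the copies are summed. This sketch uses mpi4py with NumPy arrays to show the communication pattern; the paper's code applies the same idea to per-GPU grids, and the grid and particle sizes here are arbitrary.

```python
# Run with e.g.: mpirun -n 4 python pic_replication_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
grid_local = np.zeros(1024)                        # full grid copy on every rank
positions = np.random.default_rng(comm.rank).uniform(0, 1024, size=250_000)
cells = positions.astype(int) % grid_local.size
np.add.at(grid_local, cells, 1.0)                  # deposit this rank's particles

grid_global = np.empty_like(grid_local)
comm.Allreduce(grid_local, grid_global, op=MPI.SUM)  # sum the replicated grids
# Every rank now holds the total charge density and can solve fields locally.
print(f"rank {comm.rank}: total charge {grid_global.sum():.0f}")
```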
Abstract: In recent years, Convolutional Neural Networks (CNNs) have enabled unprecedented progress on a wide range of computer vision tasks. However, training large CNNs is a resource-intensive task that requires specialized Graphics Processing Units (GPUs) and highly optimized implementations to get optimal performance from the hardware. GPU memory is a major bottleneck of the CNN training procedure, limiting the size of both inputs and model architectures. In this paper, we propose to alleviate this memory bottleneck by leveraging an under-utilized resource of modern systems: the device-to-host bandwidth. Our method, termed CPU offloading, works by transferring hidden activations to the CPU upon computation, in order to free GPU memory for upstream layer computations during the forward pass. These activations are then transferred back to the GPU as needed by the gradient computations of the backward pass. The key challenge of our method is to efficiently overlap data transfers and computations in order to minimize the wall-time overhead induced by the additional data transfers. On a typical workstation with an Nvidia Titan X GPU, we show that our method compares favorably to gradient checkpointing: we are able to reduce the memory consumption of training a VGG19 model by 35% with a minimal additional wall-time overhead of 21%. Further experiments detail the impact of the different optimization tricks we propose. Our method is orthogonal to other techniques for memory reduction such as quantization and sparsification, so they can easily be combined for further optimizations....
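A compact way to experiment with this idea in PyTorch is via saved-tensor hooks, which move activations to host memory when saved and bring them back for the backward pass. This is a generic re-implementation of the concept, not the authors' optimized, overlap-aware code (no prefetching or transfer/compute overlap here), and it assumes a CUDA-capable machine.

```python
# Offload saved activations to the CPU on the forward pass and restore them
# on demand during the backward pass.
import torch
import torchvision

def pack_to_cpu(t):
    return t.to("cpu", non_blocking=True)          # offload activation to host

def unpack_to_gpu(t):
    return t.to("cuda", non_blocking=True)         # restore it for backward

model = torchvision.models.vgg19().cuda()
x = torch.randn(8, 3, 224, 224, device="cuda")

with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_to_gpu):
    loss = model(x).sum()
loss.backward()                                    # activations stream back in
```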
Abstract: The use of mobile phone applications in tourism is very common nowadays given the ubiquity of smartphones. Tourism mobile applications are arguably among the most useful applications for facilitating travelers' movements. However, existing usability evaluation metrics are too general to apply to a more specific application such as a mobile tourism application. Thus, the objective of this study is to propose usability evaluation metrics for tourism mobile applications. The study comprises four phases: identifying the problem and the objective, encompassing the techniques for developing usability metrics, selecting the usability metrics for tourism mobile applications, and conducting expert review and verification. The verification phase used an expert review approach to assess the proposed metrics in terms of consistency, ease of use, understandability, verifiability, and overall impression. The findings reveal that the proposed metrics were well received by the experts on all of these criteria. Finally, this study presents usability metrics for tourism mobile applications that designers and usability practitioners can use to create usable mobile applications for tourists....
Abstract: This study presents a parameter selection strategy developed for the stretch-blow molding (SBM) process to minimize the weight of the preforms used. The method is based on a predictive model built with neural networks. The temperature distribution of the preform was predicted using a three-layer NN model with supervised backpropagation learning. In addition, the model was used to predict the uniform air pressure applied inside the preform, taking into account the relationship between the internal air pressure and the preform volume. Parameters were validated through in-situ tests and measurements performed on several weights and lengths of 0.330-liter polyethylene terephthalate (PET) bottles. Tests showed that the model adequately predicts the blowing kinematics, mainly zone temperatures and the blowing and stretching pressures along the walls of the bottle, while maintaining the bottle strength and top-load requirements. In a second step, the model was used to automatically compute the lowest preform weight that can be used for a particular 330 ml bottle design while providing a uniform wall-thickness distribution....
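Roughly the kind of model described is a small feed-forward network trained with backpropagation to map process settings to a temperature profile; in this scikit-learn sketch the feature names, layer width, and data are all synthetic assumptions, not the study's.

```python
# Fit a one-hidden-layer (i.e., three-layer) MLP from process settings to
# multi-output preform zone temperatures on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))      # e.g. heater powers, stretch speed, pressure
y = X @ rng.uniform(size=(4, 6))    # e.g. temperatures at 6 preform zones

model = MLPRegressor(hidden_layer_sizes=(16,),   # one hidden layer
                     max_iter=2000).fit(X, y)
print(model.predict(X[:1]))         # predicted zone temperatures for one setting
```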
Abstract: Software development is a complex and difficult task that requires the investment of sufficient resources and carries a major risk of failure. Model Driven Engineering (MDE) focuses on creating software models and automating code generation from those models. Model Driven Software Development (MDSD) offers significantly more effective approaches that improve the way software is built: model-driven approaches can increase developer productivity, decrease the cost of software construction, improve software reusability, and make software more maintainable. This paper investigates methods in which Model Driven Software Development is integrated with Software Product Lines (SPL). This systematic literature review (SLR) was conducted to identify 71 research works published since 2014. We collected 18 tools, 14 techniques, and 17 languages used in MDSD for SPL, analyze which techniques are suitable for SPL, and compare the techniques on the basis of the features provided by these tools to identify those that give better-quality results....
Abstract: The requirements of a system keep changing based on the needs of stakeholders and system developers, making requirements engineering an important aspect of software development and creating a need for appropriate requirement change management. Requirements traceability defines relationships between the requirements and the artefacts produced with stakeholders during the software development life cycle, and it gives vital information that encourages software understanding. In this paper, we concentrate on developing a requirement traceability tool that extends requirement elicitation and the identification of system-wide qualities, using the notion of quality attribute scenarios to capture non-functional requirements. It allows us to link the functional and non-functional requirements of the system based on the quality attribute scenario template proposed by the Carnegie Mellon Software Engineering Institute (SEI). Apart from this, the paper focuses on tracing the functional and non-functional requirements of the system using a requirement traceability matrix....
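To make the data structure concrete, here is a toy illustration of a traceability matrix that links a functional requirement to artifacts and, via an abridged quality attribute scenario, to a non-functional requirement; all identifiers and fields are invented for the example.

```python
# Link functional requirements to artifacts and quality attribute scenarios.
qas = {  # quality attribute scenario (abridged from the SEI six-part template)
    "QAS-1": {"stimulus": "100 concurrent logins", "response": "p95 < 2 s",
              "attribute": "performance"},
}
trace_matrix = {
    "FR-1 user login": {"artifacts": ["LoginService.java", "TC-07"],
                        "linked_nfr": ["QAS-1"]},
}
for req, row in trace_matrix.items():
    for nfr in row["linked_nfr"]:
        print(f"{req} -> {nfr}: {qas[nfr]['attribute']} ({qas[nfr]['response']})")
```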
Abstract: The ability to accurately estimate the cost needed to complete a specific project has been a challenge over the past decades. For a successful software project, accurate prediction of cost, time, and effort is an essential task. This paper presents a systematic review of different models used for software cost estimation, including algorithmic, non-algorithmic, and learning-oriented methods. The models considered in this review include both traditional and recent approaches to software cost estimation. The main objective of this paper is to provide an overview of software cost estimation models and summarize their strengths, weaknesses, accuracy, the amount of data needed, and the validation techniques used. Our findings show that, in general, neural-network-based models outperform other cost estimation techniques. However, no single technique fits every problem, and we recommend that practitioners search for the model that best fits their needs....
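One classic instance of the algorithmic class such a review covers is Basic COCOMO, which estimates effort as E = a · KLOC^b person-months; the coefficients below are the published values for an "organic" (small, familiar) project, and this is offered only as an example of the method family, not as one of the paper's specific evaluations.

```python
# Basic COCOMO effort estimate for an organic-mode project.
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    return a * kloc ** b    # effort in person-months

print(f"{cocomo_basic_effort(32):.1f} person-months for a 32 KLOC project")
```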
Abstract: This paper presents a new conceptual model of the school transportation supply-demand ratio (STSDR) to define the number of school buses needed in a limited area and to describe the condition of a school transport system. For this purpose, a mathematical equation was elaborated to simulate the real system based on school transport conditions and on estimated STSDR results from 15 zones of the city of Cuenca, Ecuador. The data used in our model were collected from several diverse sources (i.e., administrative data and survey data). The estimated results show that our equation describes the school transport system efficiently, reaching an accuracy of 96%. Our model is therefore suitable for statistical estimation given adequate data and will be useful in school transport planning policy; it is thus a support model for making decisions that seek efficiency in the balance of supply and demand....
Abstract: Recently, several deep learning models have been successfully proposed and applied to solve different Natural Language Processing (NLP) tasks. However, these models solve each problem with single-task supervised learning and do not consider the correlations between tasks. Based on this observation, in this paper we implemented a multi-task learning model to jointly learn two related NLP tasks simultaneously and conducted experiments to evaluate whether learning these tasks jointly can improve system performance compared with learning them individually. In addition, we compare our model with state-of-the-art learning models, including multi-task learning, transfer learning, unsupervised learning, and feature-based traditional machine learning models. This paper aims to 1) show the advantage of multi-task learning over single-task learning in training related NLP tasks, 2) illustrate the influence of various encoding structures on the proposed single- and multi-task learning models, and 3) compare the performance of multi-task learning with that of other learning models in the literature on the textual entailment and semantic relatedness tasks....
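The core wiring of such a hard-parameter-sharing multi-task model is a shared encoder feeding two task-specific heads, with the task losses summed for a joint update. In this PyTorch sketch the sizes and the encoder choice (a GRU) are illustrative assumptions, not the paper's architecture.

```python
# One shared encoder, two heads: entailment (classification) and
# semantic relatedness (regression); losses are summed and backpropagated.
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, vocab=10_000, dim=128, n_entail=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)   # shared encoder
        self.entail_head = nn.Linear(dim, n_entail)   # entailment classes
        self.related_head = nn.Linear(dim, 1)         # relatedness score

    def forward(self, tokens):
        _, h = self.encoder(self.embed(tokens))
        h = h.squeeze(0)                              # final hidden state
        return self.entail_head(h), self.related_head(h).squeeze(-1)

model = MultiTaskModel()
tokens = torch.randint(0, 10_000, (4, 20))            # a batch of token ids
entail_logits, related_score = model(tokens)
loss = (nn.functional.cross_entropy(entail_logits, torch.tensor([0, 1, 2, 1]))
        + nn.functional.mse_loss(related_score, torch.rand(4)))  # joint loss
loss.backward()                                       # updates the shared encoder
```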
Abstract: The contemporary scientific literature on the dynamics of marine chlorophyll-a concentration already customarily employs data mining techniques on small geographic areas or regional samples. However, there is little focus on the issue of missing data in chlorophyll-a concentrations estimated by remote sensors. To give greater scope to the identification of spatiotemporal distribution patterns of marine chlorophyll-a concentrations, and to improve the reliability of results, this study presents a data mining approach that clusters similar chlorophyll-a concentration behaviors while applying an iterative spatiotemporal interpolation technique to infer missing data. Although specialists already know some dynamic behaviors of these concentrations in specific areas, systematic studies over large geographical areas are still scarce because of the computational complexity involved. For this reason, this study analyzed 18 years of NASA satellite observations over a portion of the Western Atlantic Ocean, totaling more than 60 million records. Additionally, performance tests were carried out on low-cost computer systems to check the accessibility of the proposal for computational structures of different sizes. The approach was able to identify patterns with high spatial resolution, accuracy, and reliability, rendered on low-cost computers even with large volumes of data, generating new and consistent patterns of spatiotemporal variability. It thus opens up new possibilities for data mining research on a global scale in this field of application....
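A small-scale sketch of the two stages described follows: iteratively infer missing values from their spatiotemporal neighborhood, then cluster pixels by the similarity of their time series. The grid size, neighborhood, iteration count, and cluster count are placeholders, and the neighborhood-mean rule is an assumed stand-in for the study's interpolation technique.

```python
# Iterative spatiotemporal gap-filling followed by k-means on pixel series.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
chl = rng.lognormal(size=(24, 32, 32))           # (months, lat, lon) values
chl[rng.random(chl.shape) < 0.2] = np.nan        # simulate cloud-masked pixels

for _ in range(10):                              # iterative interpolation
    filled = np.where(np.isnan(chl), 0.0, chl)
    weight = uniform_filter((~np.isnan(chl)).astype(float), size=3)
    smooth = uniform_filter(filled, size=3)
    estimate = np.divide(smooth, weight, out=np.zeros_like(smooth),
                         where=weight > 0)       # 3x3x3 neighborhood mean
    chl = np.where(np.isnan(chl) & (weight > 0), estimate, chl)

series = chl.reshape(24, -1).T                   # one time series per pixel
labels = KMeans(n_clusters=4, n_init=10).fit_predict(series)
print(np.bincount(labels))                       # pixels per behavioral cluster
```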
Abstract: Recent advancements in computing research and technology will allow future immersive virtual reality systems to be voxel-based, i.e., entirely based on gap-less, spatial representations of volumetric pixels. The current popularity of pixel-based videoconferencing systems could then turn into true, voxel-based telepresence experiences, and richer non-verbal communication will be possible thanks to the three-dimensional nature of such systems. An effective telepresence experience rests on the users' sense of copresence with others in the virtual environment and on a sense of embodiment. We investigate two main quality-of-service factors, namely voxel size and network latency, to identify acceptable threshold values for maintaining the copresence and embodiment experience. We present a working prototype implementation of a voxel-based telepresence system and show that even a coarse 64 mm voxel size and an overall round-trip latency of 542 ms are sufficient to maintain copresence and embodiment. We provide threshold values for noticeable, disruptive, and unbearable latencies that can serve as guidelines for future voxel and other telepresence systems....
Abstract: We describe four fundamental challenges that complex real-life Virtual Reality (VR) productions face today (such as multi-camera management, quality control, automatic annotation with cinematography, and 360˚ depth estimation) and describe an integrated solution, called Hyper 360, to address them. We demonstrate our solution and its evaluation in the context of practical productions and present related results....