My direction
My teaching courses range from Software Engineering to Network Engineering.
This article presents a software library for the Arduino platform that significantly improves the speed of the functions for digital input and output. This allows users to apply these functions in a whole range of applications without being forced to resort to direct register access or various third-party libraries when the standard Arduino functions are too slow for a given application. The method used in this library is also applicable to other libraries that aim to abstract access to the general-purpose pins of a microcontroller.
@article {YuToward2015, title={Toward Core Point Evolution Using Water Ripple Model}, author={Zhibing Yu and Kun Ma}, journal={WSEAS Transactions on Computers}, pages={819-825}, year={2015}, volume={14}, number={Art. #79}}
To address the problem of detecting duplicate values in incremental data, this paper analyzes the basic sorted-neighborhood algorithm and proposes an incremental sorted-neighborhood comparison algorithm. The algorithm compares adjacent records through a sliding (skipping) window, greatly reducing the number of data comparisons; a MapReduce model is then introduced to improve the algorithm's capacity for processing massive data. Experiments show that, while guaranteeing accurate detection results, the improved incremental sorted-neighborhood comparison algorithm effectively increases the speed of incremental duplicate detection, exhibits high stability, and is better suited to duplicate-data detection tasks in massive-data environments.
@article{董富森2015mapreduce, title={MapReduce 模型下增量重复数据检测方法}, author={董富森 and 杨波 and 马坤 and 王文华}, journal={济南大学学报 (自然科学版)}, volume={4}, pages={001}, year={2015}}
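The sliding-window comparison described above can be sketched in Python. This is a minimal illustration of the sorted-neighborhood idea only; the blocking key, window size, similarity measure, and threshold are all assumptions, not the paper's implementation:

```python
from difflib import SequenceMatcher

def sorted_neighborhood_duplicates(records, key, window=3, threshold=0.8):
    """Sorted-neighborhood method: sort records by a blocking key, then
    compare each record only with its neighbors inside a small sliding
    window, instead of with every other record."""
    ordered = sorted(records, key=key)
    duplicates = []
    for i, rec in enumerate(ordered):
        # Only look ahead within the window: far fewer comparisons
        # than the quadratic all-pairs approach.
        for j in range(i + 1, min(i + window, len(ordered))):
            other = ordered[j]
            sim = SequenceMatcher(None, key(rec), key(other)).ratio()
            if sim >= threshold:
                duplicates.append((rec, other))
    return duplicates

records = ["John Smith", "Jon Smith", "Alice Wu", "Alicia Wu", "Bob Lee"]
pairs = sorted_neighborhood_duplicates(records, key=lambda r: r)
```

An incremental variant would keep the previously sorted data and merge only the new records into their sorted positions before re-running the window comparison near the insertion points.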
Maintaining data indexes and the query cache becomes a bottleneck for the database, especially when data are updated frequently. To reduce this burden, a cache system for frequently updated data is proposed in this paper. In the system, update statements are parsed first; the updated data are then saved as key-value pairs in the cache and synchronized into the database at idle time. Experimental results show that the proposed cache system not only accelerates the data update rate but also greatly improves write performance while maintaining indexes and the consistency of cached data.
@article {DongCache2015, title={Cache System for Frequently Updated Data in the Cloud}, author={Fusen Dong and Kun Ma and Bo Yang}, journal={WSEAS Transactions on Computers}, pages={163-170}, year={2015}, volume={14}, number={Art. #17}}
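The write-behind pattern described above — updates buffered as key-value pairs and flushed to the database at idle time — can be sketched roughly as follows. The class and method names are illustrative only, not the paper's API, and the backing database is modelled as a plain dict:

```python
class WriteBehindCache:
    """Buffer updates as key-value pairs and flush them to the
    backing store in one batch at idle time (write-behind)."""

    def __init__(self, store):
        self.store = store   # backing "database": a dict in this sketch
        self.dirty = {}      # pending updates, key -> latest value

    def update(self, key, value):
        # Fast path: only the in-memory cache is touched.
        self.dirty[key] = value

    def get(self, key):
        # Prefer pending (newest) data over the backing store.
        if key in self.dirty:
            return self.dirty[key]
        return self.store.get(key)

    def flush(self):
        # Called at idle time: synchronize all pending updates at once,
        # so indexes are maintained once per batch, not per statement.
        self.store.update(self.dirty)
        count = len(self.dirty)
        self.dirty.clear()
        return count

db = {"user:1": "Alice"}
cache = WriteBehindCache(db)
cache.update("user:1", "Alicia")
cache.update("user:2", "Bob")
assert cache.get("user:1") == "Alicia"   # read sees the pending write
assert db["user:1"] == "Alice"           # database not yet touched
cache.flush()
```

Note that repeated updates to the same key collapse into one write at flush time, which is where the batching benefit comes from.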
Content syndication is the process of pushing information out to third-party information providers. The idea is to drive more engagement with your content by wiring it into related digital contexts. However, current products have some shortcomings, such as search over massive feeds, synchronization performance, and user experience. To address these limitations, we propose an improved architecture for content syndication and recommendation. First, we design a source listener that extracts feed changes from different RSS sources and propagates the incremental changes to schema-free document stores to improve search performance. Second, the proposed recommendation algorithm tidies, filters, and sorts all feeds before pushing them to users automatically. Third, we provide OAuth2-authorized RESTful feed-sharing APIs for integration with third-party systems. The experimental results show that this architecture speeds up the search and synchronization process and provides a friendlier user experience.
@article {TangRSSCube2014, title={RSSCube: A Content Syndication and Recommendation Architecture}, author={Zijie Tang and Kun Ma}, journal={International Journal of Database Theory and Application}, pages={237-248}, year={2014}, volume={7}, number={4}}
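The source-listener idea above — detecting only the incremental feed changes and propagating them to the document store — might look like this in outline. The entry structure (dicts with an `id` field) and the store interface are assumptions for illustration, not RSSCube's actual design:

```python
def extract_changes(previous_ids, fetched_entries):
    """Return only the entries not seen in the previous poll, so that
    downstream stores receive incremental changes rather than the
    whole feed."""
    return [e for e in fetched_entries if e["id"] not in previous_ids]

def propagate(store, changes):
    """Write incremental changes into a schema-free document store
    (modelled here as a dict keyed by entry id)."""
    for entry in changes:
        store[entry["id"]] = entry
    return len(changes)

seen = {"a1", "a2"}          # ids observed on the last poll
poll = [
    {"id": "a2", "title": "old post"},
    {"id": "a3", "title": "new post"},
]
doc_store = {}
changes = extract_changes(seen, poll)
propagate(doc_store, changes)
```

In a real listener the `seen` set would be persisted between polls, and the store would be a document database rather than a dict.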
Purpose
Propaganda is a prevalent technique used on social media to intentionally express opinions or actions with the aim of manipulating or deceiving users. Existing methods for propaganda detection primarily focus on capturing language features within the content. However, these methods tend to overlook the external news environment from which propaganda news originates and spreads. This news environment reflects recent mainstream media opinions and public attention and contains the language characteristics of non-propaganda news. Therefore, the authors propose a graph-based multi-information integration network with an external news environment (abbreviated as G-MINE) for propaganda detection.

Design/methodology/approach
G-MINE comprises four parts: a textual information extraction module, an external news environment perception module, a multi-information integration module, and a classifier. Specifically, the external news environment perception module and the multi-information integration module extract the popularity and novelty, integrate them into the textual information, and capture the high-order complementary information between them.

Findings
G-MINE achieves state-of-the-art performance on the TSHP-17, Qprop, and PTC data sets, with accuracies of 98.24%, 90.59%, and 97.44%, respectively.

Originality/value
An external news environment perception module is proposed to capture the popularity and novelty information, and a multi-information integration module is proposed to effectively fuse them with the textual information.
Multi-choice reading comprehension is a task that involves selecting the correct option from a set of candidate choices. Recently, the attention mechanism has been widely used to acquire embedding representations. However, there are two significant challenges: 1) generating contextualized representations, namely, drawing on associated information, and 2) capturing the global interactive relationship, namely, drawing on local semantics. To address these issues, we have proposed the Dual Integrated Matching Network (DIMN) for multi-choice reading comprehension. It consists of two major parts. Fusing Information from Passage and Question-option pair into Enhanced Embedding Representation (FEER) is proposed to draw associated information to enhance the embedding representation; it incorporates the information reflecting the most salient supporting entities for answering the question into the contextualized representations. Linear Integration of Co-Attention and Convolution (LIAC) is proposed to capture interactive information and local semantics to construct the global interactive relationship; it incorporates the local semantics of a single sequence into the question-option-aware passage and the passage-aware question-option representation. The experiments show that our DIMN achieves better accuracy on three datasets: RACE (69.34%), DREAM (68.45%) and MCTest (71.81% on MCTest160 and 78.83% on MCTest500). DIMN is beneficial for improving the ability of machines to understand natural language, and the system we have developed has been applied to customer service support. Our source code is available at https://github.com/vqiangv/DIMN.
@article{WEI2024107694, title = {DIMN: Dual Integrated Matching Network for multi-choice reading comprehension}, journal = {Engineering Applications of Artificial Intelligence}, volume = {130}, pages = {107694}, year = {2024}, issn = {0952-1976}, doi = {https://doi.org/10.1016/j.engappai.2023.107694}, url = {https://www.sciencedirect.com/science/article/pii/S095219762301878X}, author = {Qiang Wei and Kun Ma and Xinyu Liu and Ke Ji and Bo Yang and Ajith Abraham}, keywords = {Multi-choice reading comprehension, Contextualized representation, Global interactive relationship, Attention, Convolution}, abstract = {Multi-choice reading comprehension is a task that involves selecting the correct option from a set of option choices. Recently, the attention mechanism has been widely used to acquire embedding representations. However, there are two significant challenges: (1) generating the contextualized representations, namely, drawing associated information, and (2) capturing the global interactive relationship, namely, drawing local semantics. To address these issues, we have proposed the Dual Integrated Matching Network (DIMN) for multi-choice reading comprehension. It consists of two major parts. Fusing Information from Passage and Question-option pair into Enhanced Embedding Representation (FEER) is proposed to draw associated information to enhance embedding representation, which incorporates the information that reflects the most salient supporting entities to answer the question into the contextualized representations; Linear Integration of Co-Attention and Convolution (LIAC) is proposed to capture the interactive information and local semantics to construct global interactive relationship, which incorporates local semantics of a single sequence into the question-option-aware passage and passage-aware question-option representation. 
The experiments are shown that our DIMN performs better accuracy on three datasets: RACE (69.34%), DREAM (68.45%) and MCTest (71.81% on MCTest160 and 78.83% on MCTest500). Our DIMN is beneficial for improving the ability of machines to understand natural language. The system we have developed has been applied to customer service support. Our source code is accessible at https://github.com/vqiangv/DIMN.} }
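The co-attention step named in LIAC — an affinity matrix between passage and question-option vectors, normalized row-wise by softmax — can be illustrated in miniature. The dimensions and vectors below are toy values, not the model's learned representations:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def co_attention(passage, question):
    """Affinity A[i][j] = p_i . q_j; softmax over each row then gives
    how much passage token i attends to each question token."""
    affinity = [[sum(p * q for p, q in zip(pv, qv)) for qv in question]
                for pv in passage]
    return [softmax(row) for row in affinity]

# Two passage token vectors and two question token vectors (toy, 3-d).
P = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
Q = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
attn = co_attention(P, Q)
```

Each row of `attn` sums to 1, and a passage token attends most to the question token whose vector it aligns with; in the full model these weights are used to build the question-option-aware passage representation.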
@article{tang2022long, title={Long text feature extraction network with data augmentation}, author={Tang, Changhao and Ma, Kun and Cui, Benkuan and Ji, Ke and Abraham, Ajith}, journal={Applied Intelligence}, pages={1--16}, year={2022}, publisher={Springer} }
Community Question Answering (CQA) sites have become one of the most important knowledge-sharing platforms on the Internet. Effectively recommending the massive numbers of user-posted questions to the users who may answer them, and mining the questions users are interested in, is the core function of such platforms. Several expert recommendation algorithms have been proposed to improve answering efficiency, but most existing work focuses on matching user interests against question information and ignores the dynamic change of user interests, which may seriously degrade recommendation quality. This paper proposes an expert recommendation algorithm combining attention and recurrent neural networks, which not only performs deep feature encoding of question information but also captures dynamically changing user interests. First, the question encoder combines a convolutional neural network (CNN) and an attention mechanism on top of pre-trained word embeddings to jointly represent the deep features of the question title and its bound tags. Then, the user encoder applies a Bi-GRU recurrent model over the time series of questions the user has historically answered to capture dynamic interests, and combines the user's fixed tag information to represent long-term interests. Finally, recommendations combining dynamic and long-term user interests are produced by computing the similarity of the two encoders' output vectors. Comparative experiments with different parameter configurations and different algorithms on real data from the Zhihu Q&A community show that the algorithm clearly outperforms currently popular deep-learning expert recommendation algorithms.
In recent years, neural-network-based models such as machine learning and deep learning models have achieved excellent results in text classification. In research on marketing intention detection, classification is adopted to identify news with marketing intent. However, most current news appears in the form of dialogs, and it is challenging to find the potential relevance between news sentences in order to determine the latent semantics. To address this issue, this paper proposes a CLSTM-based topic memory network (CLSTM-TMN for short) for marketing intention detection. A ReLU Neural Topic Model (RNTM) is proposed: a hidden layer is constructed to efficiently capture the topic document representation, and latent variables are applied to enhance the granularity of topic model learning. We change the structure of the current Neural Topic Model (NTM) to add a CLSTM classifier. This method is a new ensemble combining long short-term memory (LSTM) and a convolutional neural network (CNN). The CLSTM structure can find relationships in a sequence of text input and can extract local, dense features through convolution operations. The effectiveness of the method for marketing intention detection is illustrated in the experiments; our detection model achieves a more significant improvement in F1 (7%) than the other compared models.
@article{WANG2020103595, title = "A CLSTM-TMN for marketing intention detection", journal = "Engineering Applications of Artificial Intelligence", volume = "91", pages = "103595", year = "2020", issn = "0952-1976", doi = "https://doi.org/10.1016/j.engappai.2020.103595", url = "http://www.sciencedirect.com/science/article/pii/S0952197620300671", author = "Yufeng Wang and Kun Ma and Laura Garcia-Hernandez and Jing Chen and Zhihao Hou and Ke Ji and Zhenxiang Chen and Ajith Abraham", keywords = "Text classification, Marketing intention, Topic memory, News", abstract = "In recent years, neural network-based models such as machine learning and deep learning have achieved excellent results in text classification. On the research of marketing intention detection, classification measures are adopted to identify news with marketing intent. However, most of current news appears in the form of dialogs. There are some challenges to find potential relevance between news sentences to determine the latent semantics. In order to address this issue, this paper has proposed a CLSTM-based topic memory network (called CLSTM-TMN for short) for marketing intention detection. A ReLU-Neuro Topic Model (RNTM) is proposed. A hidden layer is constructed to efficiently capture the subject document representation, Potential variables are applied to enhance the granularity of subject model learning. We have changed the structure of current Neural Topic Model (NTM) to add CLSTM classifier. This method is a new combination ensemble both long and short term memory (LSTM) and convolution neural network (CNN). The CLSTM structure has the ability to find relationships from a sequence of text input, and the ability to extract local and dense features through convolution operations. The effectiveness of the method for marketing intention detection is illustrated in the experiments. Our detection model has a more significant improvement in F1 (7%) than other compared models." }
To effectively detect unknown mobile malware, a machine-learning-based method is proposed that uses robust statistical features extracted from network traffic to train a detection model capable of identifying unknown malicious mobile network traffic. The model mainly comprises preprocessing of Android malware sample data, automatic collection of network traffic data, and training of the machine-learning detection model. Experiments on zero-day malware detection at different points in time verify the effectiveness of the model. The results show that the proposed method achieves a detection precision of over 90% on unknown malicious samples, with an F-measure of 80%.
@article{李浩2019基于网络流量分析的未知恶意软件检测, title={基于网络流量分析的未知恶意软件检测}, author={李浩 and 马坤 and 陈贞翔 and 赵川}, journal={济南大学学报(自然科学版)}, number={6}, year={2019}, }
For the text sentiment classification problem in natural language processing, a text sentiment classification method based on ensemble learning is proposed. Given the particularities of Weibo data, the microblog data are first preprocessed by word segmentation, features are extracted and reduced in dimension by combining term frequency-inverse document frequency (TF-IDF) with singular value decomposition (SVD), and the classification models are then fused by stacking (stacked generalization) ensemble learning. The results show that the fused model reaches an accuracy of 93% on text sentiment analysis and can effectively determine the sentiment polarity of microblog texts.
@article{段吉东2019基于集成学习的文本情感分类方法, title={基于集成学习的文本情感分类方法}, author={段吉东 and 刘双荣 and 马坤 and 孙润元}, journal={济南大学学报(自然科学版)}, year={2019}, }
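Stacked generalization as described above — base classifiers' predictions become the features for a meta-level combiner — can be sketched with trivial classifiers. The classifiers and the majority-style meta-rule below are placeholders for illustration, not the paper's TF-IDF/SVD pipeline:

```python
def base_predict(classifiers, x):
    """Level-0: each base classifier votes on sample x (1 = positive)."""
    return [clf(x) for clf in classifiers]

def stack_predict(classifiers, meta, x):
    """Level-1: the meta-learner combines the base predictions."""
    return meta(base_predict(classifiers, x))

# Toy base classifiers for the sentiment of a token list.
positive_words = {"good", "great", "happy"}
negative_words = {"bad", "awful", "sad"}
clf_pos = lambda x: 1 if any(w in positive_words for w in x) else 0
clf_neg = lambda x: 0 if any(w in negative_words for w in x) else 1
clf_len = lambda x: 1 if len(x) <= 4 else 0

# Meta-learner: simple majority over base outputs; in the paper this
# level is a trained model, not a fixed rule.
meta = lambda votes: 1 if sum(votes) >= 2 else 0

pred = stack_predict([clf_pos, clf_neg, clf_len], meta, ["a", "good", "day"])
```

In a trained stacking setup the meta-learner is fitted on out-of-fold predictions of the base models, so it learns which base classifier to trust in which region of the input space.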
Social network services for self-media, such as Weibo, Blog, and WeChat Public, constitute a powerful medium that allows users to publish posts every day. Due to insufficient information transparency, malicious Internet marketing through self-media posts imposes potential harm on society. Therefore, it is necessary to identify news with marketing intentions. We follow the idea of text classification to identify marketing intentions. Although there are existing methods for intention detection, the challenges are how the feature extraction of text can reflect semantic information and how to improve the time and space complexity of the recognition model. To this end, this paper proposes a machine learning method to identify marketing intentions from large-scale We-Media data. First, the proposed Latent Semantic Analysis (LSI)-Word2vec model reflects the semantic features. Second, the decision tree model is simplified by pruning to save computing resources and reduce time complexity. Third, this paper examines the effects of classifier combinations and uses the optimal configuration to help people efficiently identify marketing intentions. A detailed experimental evaluation on several metrics shows that our approaches are effective and efficient: the F1 value is increased by about 5% and the running time is improved by 20%, which proves that the newly proposed method can effectively improve the accuracy of marketing news recognition.
@Article{fi11070155, AUTHOR = {Wang, Yufeng and Liu, Shuangrong and Li, Songqian and Duan, Jidong and Hou, Zhihao and Yu, Jia and Ma, Kun}, TITLE = {Stacking-Based Ensemble Learning of Self-Media Data for Marketing Intention Detection}, JOURNAL = {Future Internet}, VOLUME = {11}, YEAR = {2019}, NUMBER = {7}, ARTICLE-NUMBER = {155}, URL = {https://www.mdpi.com/1999-5903/11/7/155}, ISSN = {1999-5903}, ABSTRACT = {Social network services for self-media, such as Weibo, Blog, and WeChat Public, constitute a powerful medium that allows users to publish posts every day. Due to insufficient information transparency, malicious marketing of the Internet from self-media posts imposes potential harm on society. Therefore, it is necessary to identify news with marketing intentions for life. We follow the idea of text classification to identify marketing intentions. Although there are some current methods to address intention detection, the challenge is how the feature extraction of text reflects semantic information and how to improve the time complexity and space complexity of the recognition model. To this end, this paper proposes a machine learning method to identify marketing intentions from large-scale We-Media data. First, the proposed Latent Semantic Analysis (LSI)-Word2vec model can reflect the semantic features. Second, the decision tree model is simplified by decision tree pruning to save computing resources and reduce the time complexity. Finally, this paper examines the effects of classifier associations and uses the optimal configuration to help people efficiently identify marketing intention. Finally, the detailed experimental evaluation on several metrics shows that our approaches are effective and efficient. The F1 value can be increased by about 5%, and the running time is increased by 20%, which prove that the newly-proposed method can effectively improve the accuracy of marketing news recognition.}, DOI = {10.3390/fi11070155} }
Jiahao Zhang, Self-Service Ordering System, 2019
Fanghan Liu, Literature Management System, 2019
Songqian Li, Office OA System, 2018
Hao Qu and Zhe Yang, Official Website of University of Jinan, 2018
Hao Qu, Official Website of the School of Civil Engineering and Architecture, 2018
Hao Qu, Jayce, 2018
Hao Qu, Programmer Chrome Tab, 2018
Hao Qu, Jingying Education, 2018
Hao Qu, Shuimo Rensheng Mall, 2018
Hao Qu, Xiaoxianghui, 2016
Songqian Li, Student Affairs Online of University of Jinan (2017 edition), 2018
Songqian Li and Xuewei Niu, 2017
Songqian Li, Freshman Orientation System for the Class of 2017, 2018
Songqian Li, Student Affairs Online Recruitment System for the Class of 2017, 2018
Songqian Li, Student Affairs Online of University of Jinan (2018 edition), 2018
Zhe Yang, Vehicle Management System of Shandong University, 2017
Zhe Yang, Official Website of University of Jinan, 2017
Zhe Yang, Official Website of the School of Information Science and Engineering, University of Jinan, 2017
Zhe Yang, Big-Data-Driven Innovation Method Work Platform, 2017
Zhe Yang, Qudayin (Fun Print) System, 2017
Xuewei Niu, Shaimi Photo-Booking Platform, 2017
Shuwei Yao, Student Online Mutual-Aid Q&A System, 2017
Zhe Yang, Xiangsu, 2016
Zhe Yang, C.D.Cafe Ordering System, WeChat ID cdcafe_chin, 2016
Zhe Yang, C.D. Takeout System - Miyou Sichu, WeChat ID miyousichu, 2016
Zhe Yang, Shiquanshimei Takeout, WeChat ID SQSMwaimai, 2016
Zhe Yang, Yile Study Abroad, 2016
Zhe Yang, Online Handbook of the School of Civil Engineering and Architecture, 2016
Zhe Yang, Hengxin Weijin CRM (Jinan Branch of Beijing Jiufu Wealth), beta version, 2016
Zhe Yang, Cultural Center of Zhenlai County, Jilin Province, 2016
Xiaonan Ji, Static Blog, 2016
Xiaonan Ji, Doutu (Meme Sharing) Website, 2016
Xiaonan Ji, Property Management Center of University of Jinan, 2016
Xiaonan Ji, Cooperation and Development Office of University of Jinan, 2016
Changxin Li, Sina Cloud CMS Blog, 2016
Hao Qu, Student Affairs Office of University of Jinan, 2016
Hao Qu, Node.js-Based Blog of Houser, 2016
Hao Qu, About me, 2016
Hao Qu, School of Civil Engineering and Architecture of University of Jinan, 2016
Hao Qu, Social-Network-Based Club Management Service Platform, 2016
Zhe Yang, Logistic Duty Management, 2015
Zhe Yang, Youth Literature, 2015
Zhe Yang, Student Online, 2014
Zhe Yang, USLab, 2014
Zhe Yang, Information Disclosure of UJN, 2014
Zhe Yang, Organization Department of UJN, 2013
Zhe Yang, Student Union of UJN, 2013
Zhe Yang, Yue Dong, 2015
Zhe Yang, Yue Qi, 2015
Zhe Yang, Sheng Shi, 2015
Zhe Yang, San Zhong, 2015
Zhe Yang, 988 Shopping, 2015
Zhe Yang, Blog of Zhe Yang, 2015
Zhe Yang, Internet Navigation, 2007
Zhe Yang, Zhongqi Data, 2010
Zhe Yang, Faxinbao, 2007
Zhe Yang, Lvtian, 2014
Zhe Yang, xiaocheng Blog, 2014
Zhe Yang, Jinxing, 2014
Zhe Yang, Dianti, 2014
Zhe Yang, Longao, 2014
Zhe Yang, Baihe, 2014
Zhe Yang, Jinmingtang, 2014
Zhe Yang, Lvwei, 2014
Zhe Yang, Dance Association of UJN, 2015
Zhe Yang, Logistic Management, 2014
Shuwei Yao, Online Courseware Management System, 2015
Zhe Yang and Shuwei Yao, Achievement Assistant, 2015
Zhe Yang and Shuwei Yao, Purchase of Second-Hand Unused Goods, 2014
Zijie Tang, Youzi Fan, 2015
Zijie Tang, Information Youth of UJN, 2015
Zijie Tang, School of Political Science and Public Administration of UJN, 2015
Zijie Tang, Cultural Centre of UJN, 2015
Zijie Tang, UJNCMS, 2015
Zijie Tang, Student Union of UJN, 2015
Zijie Tang, DI JIANG, 2015
Zijie Tang, @ Me, 2013-2015, micro-film "The Probability of Love"
Zijie Tang, @ Me (ujn), 2013-2015
Zijie Tang, RSS Cube, 2013
Zijie Tang, UJN Facemash, 2013
Zijie Tang, Love Wall, 2014