Profile of Guoxing Wen (文国兴)


I. Basic Information

    Guoxing Wen (文国兴), male, born in February 1977, Ph.D., Professor, recipient of the Binzhou May 1st Labor Medal, and a Young and Middle-Aged Expert with Outstanding Contributions of Shandong Province. Courses taught: Advanced Mathematics, Probability and Statistics, Numerical Methods, and others.

    He received a Master of Science degree from Liaoning University of Technology in April 2011 and a Ph.D. from the Faculty of Science and Technology, University of Macau, in November 2014. From September 2015 to September 2016 he was a postdoctoral researcher at the National University of Singapore. His main research interests include adaptive control of nonlinear systems, multi-agent control, optimized control, reinforcement learning, neural networks, and fuzzy systems. In recent years he has published 20 SCI-indexed papers as first and corresponding author, of which 2 are ESI Highly Cited Papers, 13 appeared in IEEE Transactions journals ranked in Zone 1 of the Chinese Academy of Sciences (CAS) journal ranking, and 2 appeared in CAS Zone 2 journals. As a co-author he has published more than 20 additional SCI-indexed papers, 4 of which are ESI Highly Cited Papers.

  In recent years he has led a talent project of the Shandong Provincial Department of Education with funding of 2,000,000 RMB and a Shandong Provincial Natural Science Foundation general project with funding of 140,000 RMB. In December 2019 he received a Third Prize of the Shandong Provincial Award for Outstanding Scientific Research Achievements in Higher Education Institutions (rank 1/2), and in December 2017 a First Prize of the Binzhou Award for Outstanding Academic Achievements in Natural Science (rank 1/3).

II. Professional Titles

1. 2021.12-present   Professor

2. 2019.12-2021.12   Associate Professor

3. 2016.9-2019.12    Lecturer

III. Honorary Titles

  1. 2020.5            Awarded by: Binzhou Federation of Trade Unions      Title: Recipient of the Binzhou “May 1st Labor Medal”

  2. 2020.2-2025.1     Awarded by: People's Government of Shandong Province      Title: Young and Middle-Aged Expert with Outstanding Contributions of Shandong Province

  3. 2019.6-2022.6     Awarded by: Binzhou University      Title: University-Appointed Professor

  4. 2018.3-2021.2     Awarded by: Binzhou University      Title: “Juying Program”, Second Tier

IV. Awards

  1. Guoxing Wen (1/2), Consensus Control of Nonlinear Multi-Agent Systems, Shandong Provincial Award for Outstanding Scientific Research Achievements in Higher Education Institutions, Third Prize, 2019.12 (Guoxing Wen, Jun Feng);

  2. Guoxing Wen (1/1), Optimized Backstepping for Tracking Control of Strict Feedback Systems, Binzhou University Award for Outstanding Scientific Research Achievements (2019), Second Prize, 2019.6 (Guoxing Wen);

  3. Guoxing Wen (1/2), Neural network-based adaptive leader-following consensus control for a class of nonlinear multiagent state-delay systems, Binzhou University Award for Outstanding Scientific Research Achievements (2018), Second Prize, 2018.5 (Guoxing Wen, C. L. Philip Chen);

  4. Guoxing Wen (1/3), Neural-Network-Based Adaptive Leader-Following Consensus Control for Second-Order Nonlinear Multi-Agent Systems, 13th Binzhou Award for Outstanding Academic Achievements in Natural Science, First Prize, 2017.12 (Guoxing Wen, C. L. Philip Chen, Yan-Jun Liu).

V. Projects (Principal Investigator)

  1. National-level project, National Natural Science Foundation of China (General Program), Adaptive Reinforcement Learning Based Optimized Control of Nonlinear Strict-Feedback Systems, Grant No. 62073045, funding: 580,000 RMB, rank: 1/8, period: 2021.1-2024.12.

  2. Shandong Provincial Department of Education talent project, Shandong Provincial Youth Innovation Team for Colleges and Universities, Innovation Team for Research on Optimized Autonomous Control of Unmanned Aerial Vehicles, funding: 2,000,000 RMB, role: discipline leader, period: 2019.10-2021.12.

  3. Provincial-level project, Shandong Provincial Natural Science Foundation (General Program), Optimized Control of Multi-Agent Formations, Grant No. ZR2018MF015, funding: 140,000 RMB, rank: 1/7, period: 2018.3-2020.12.

  4. University-level project, Binzhou University Doctoral Research Start-up Fund, Consensus Control of Nonlinear Multi-Agent Systems, Grant No. 2016Y14, funding: 200,000 RMB, rank: 1/4, period: 2016.3-2019.2.

VI. Representative Publications

  1. Guoxing Wen*, Bin Li, “Optimized Leader-Follower Consensus Control Using Reinforcement Learning for a Class of Second-Order Nonlinear Multi-Agent Systems”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, DOI: 10.1109/TSMC.2021.3130070.

  2. Guoxing Wen*, Wei Hao, Weiwei Feng, Kaizhou Gao, “Optimized Backstepping Tracking Control Using Reinforcement Learning for Quadrotor Unmanned Aerial Vehicle System”, IEEE Transactions on Systems, Man, and Cybernetics: Systems, DOI: 10.1109/TSMC.2021.3112688.

  3. Guoxing Wen*, C. L. Philip Chen, “Optimized Backstepping Consensus Control Using Reinforcement Learning for a Class of Nonlinear Strict-Feedback-Dynamic Multi-Agent Systems”, IEEE Transactions on Neural Networks and Learning Systems, DOI: 10.1109/TNNLS.2021.3105548.

  4. Guoxing Wen*, Liguang Xu, Bin Li, “Optimized Backstepping Tracking Control Using Reinforcement Learning for a Class of Stochastic Nonlinear Strict-Feedback Systems”, IEEE Transactions on Neural Networks and Learning Systems, DOI: 10.1109/TNNLS.2021.3105176.

  5. Guoxing Wen*, C. L. Philip Chen, Shuzhi Sam Ge, “Simplified Optimized Backstepping Control for a Class of Nonlinear Strict-Feedback Systems with Unknown Dynamic Functions”, IEEE Transactions on Cybernetics, vol. 51, no. 9, pp. 4567-4580, Sept. 2021, DOI: 10.1109/TCYB.2020.3002108.

  6. Guoxing Wen*, Chenyang Zhang, Ping Hu, Yang Cui, "Adaptive Neural Network Leader-Follower Formation Control for a Class of Second-Order Nonlinear Multi-Agent Systems with Unknown Dynamics," in IEEE Access, vol. 8, pp. 148149-148156, Oct. 2020, DOI: 10.1109/ACCESS.2020.3015957.

  7. Guoxing Wen*, C. L. Philip Chen, Bin Li, “Optimized Formation Control Using Simplified Reinforcement Learning for a Class of Multiagent Systems with Unknown Dynamics”, IEEE Transactions on Industrial Electronics, vol. 67, no. 9, pp. 7879-7888, Sept. 2020, DOI: 10.1109/TIE.2019.2946545.

  8. Guoxing Wen*, C. L. Philip Chen, Wei Nian Li, “Simplified optimized control using reinforcement learning algorithm for a class of stochastic nonlinear systems”, Information Sciences, vol. 517, pp. 230–243, May 2020, DOI: 10.1016/j.ins.2019.12.039.

  9. Guoxing Wen*, C. L. Philip Chen, Shuzhi Sam Ge, Hongli Yang, Xiaoguang Liu, “Optimized adaptive nonlinear tracking control using actor-critic reinforcement learning strategy”, IEEE Transactions on Industrial Informatics, vol. 15, no. 9, pp. 4969-4977, Sep. 2019, DOI: 10.1109/TII.2019.2894282.

  10. Guoxing Wen*, Shuzhi Sam Ge, C. L. Philip Chen, Fangwen Tu, Shengnan Wang, “Adaptive Tracking Control of Surface Vessel Using Optimized Backstepping Technique”, IEEE Transactions on Cybernetics, vol. 49, no. 9, pp. 3420-3431, Sep. 2019, DOI: 10.1109/TCYB.2018.2844177.

  11. Guoxing Wen*, C. L. Philip Chen, Hui Dou, Hongli Yang, Chunfang Liu, “Formation Control with Obstacle Avoidance of Second-Order Multi-Agent Systems under Directed Communication Topology”, Science China Information Sciences, vol. 62, no. 9, pp. 192205:1-192205:14, July 2019, CNKI:SUN:JFXG.0.2019-09-011.

  12. Guoxing Wen*, C. L. Philip Chen, Jun Feng, Ning Zhou, “Optimized Multi-Agent Formation Control Based on Identifier-Actor-Critic Reinforcement Learning Algorithm”, IEEE Transactions on Fuzzy Systems, vol. 26, no. 5, pp. 2719-2731, Oct. 2018, DOI: 10.1109/TFUZZ.2017.2787561.

  13. Guoxing Wen*, Shuzhi Sam Ge, Fangwen Tu, “Optimized Backstepping for Tracking Control of Strict Feedback Systems”, IEEE Transactions on Neural Networks and Learning Systems, vol. 29, no. 8, pp. 3850-3862, Aug. 2018, DOI: 10.1109/TNNLS.2018.2803726.

  14. Guoxing Wen*, C. L. Philip Chen, Yan-Jun Liu, "Formation Control with Obstacle Avoidance for a class of Stochastic Multi-Agent Systems", IEEE Transactions on Industrial Electronics, vol. 65, no. 7, pp. 5847-5855, Jul. 2018, DOI: 10.1109/TIE.2017.2782229.

  15. Guoxing Wen*, C. L. Philip Chen, Yan-Jun Liu, Zhi Liu, “Neural-Network-Based Adaptive Leader-Following Consensus Control for a Class of Nonlinear Multi-Agent State-Delay Systems”, IEEE Transactions on Cybernetics, vol. 47, no. 8, pp. 2151-2160, Aug. 2017, DOI: 10.1109/TCYB.2016.2608499 (ESI Highly Cited Paper).

  16. Guoxing Wen*, Shuzhi Sam Ge, Fangwen Tu, “Artificial Potential-Based Adaptive H∞ Synchronized Tracking Control for Accommodation Vessel”, IEEE Transactions on Industrial Electronics, vol. 64, no. 7, pp. 5640-5647, July 2017, DOI: 10.1109/TIE.2017.2677330.

  17. Guo-Xing Wen*, C. L. Philip Chen, Yan-Jun Liu, Zhi Liu, “Neural-network-based adaptive leader-following consensus control for second-order non-linear multi-agent systems”, IET Control Theory & Applications, vol. 9, no. 13, pp. 1927-1934, Aug. 2015, DOI: 10.1049/iet-cta.2014.1319 (ESI Highly Cited Paper).

  18. Guo-Xing Wen, Yan-Jun Liu, C. L. Philip Chen, “Direct adaptive robust NN control for a class of discrete-time nonlinear strict-feedback SISO systems”, Neural Computing and Applications, vol. 21, no. 6, pp. 1423-1431, Sep. 2012, DOI: 10.1007/s00521-011-0596-4.

  19. Guo-Xing Wen, Yan-Jun Liu, “Adaptive Fuzzy-Neural Tracking Control for Uncertain Non-linear Discrete-Time Systems in the NARMAX form”, Nonlinear Dynamics, vol. 66, no. 4, pp. 745-753, Feb. 2011, DOI: 10.1007/s11071-011-9947-z.

  20. Guo-Xing Wen, Yan-Jun Liu, Shao-Cheng Tong, Xiao-Li Li, “Adaptive neural output feedback control of nonlinear discrete-time systems”, Nonlinear Dynamics, vol. 65, no. 1-2, pp. 65-75, Nov. 2010, DOI: 10.1007/s11071-010-9874-4.

VII. Contact Information

E-mail: wengx_bzu@hotmail.com