PT AU BA BE GP AF BF CA TI SO SE BS LA DT CT CY CL SP HO DE ID AB C1 RP EM RI OI FU FX CR NR TC Z9 U1 U2 PU PI PA SN EI BN J9 JI PD PY VL IS PN SU SI MA BP EP AR DI D2 EA PG WC SC GA UT PM OA HC HP DA C Xu, DP; Yuan, SH; Zhang, L; Wu, XT Abe, N; Liu, H; Pu, C; Hu, X; Ahmed, N; Qiao, M; Song, Y; Kossmann, D; Liu, B; Lee, K; Tang, J; He, J; Saltz, J Xu, Depeng; Yuan, Shuhan; Zhang, Lu; Wu, Xintao FairGAN: Fairness-aware Generative Adversarial Networks 2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) IEEE International Conference on Big Data English Proceedings Paper IEEE International Conference on Big Data (Big Data) DEC 10-13, 2018 Seattle, WA IEEE, IEEE Comp Soc, Expedia Grp, Baidu, Squirrel AI Learning, Ankura, Springer Fairness-aware learning is increasingly important in data mining. Discrimination prevention aims to prevent discrimination in the training data before it is used to conduct predictive analysis. In this paper, we focus on fair data generation that ensures the generated data is discrimination free. Inspired by generative adversarial networks (GAN), we present fairness-aware generative adversarial networks, called FairGAN, which are able to learn a generator producing fair data and also preserving good data utility. Compared with the naive fair data generation models, FairGAN further ensures the classifiers which are trained on generated data can achieve fair classification on real data. Experiments on a real dataset show the effectiveness of FairGAN. [Xu, Depeng; Yuan, Shuhan; Zhang, Lu; Wu, Xintao] Univ Arkansas, Fayetteville, AR 72701 USA Xu, DP (corresponding author), Univ Arkansas, Fayetteville, AR 72701 USA. depengxu@uark.edu; sy005@uark.edu; lz006@uark.edu; xintaowu@uark.edu Xu, Depeng/0000-0002-0371-1815 NSFNational Science Foundation (NSF) [1564250, 1646654, 1841119] This work was supported in part by NSF 1564250, 1646654 and 1841119. Beutel A., 2017, FAT ML; Binns R, 2017, ARXIV171203586CS; Calders T., 2009, ICDM WORKSH; Choi Edward, 2017, MLHC; Dheeru D., 2017, UCI MACHINE LEARNING; Dwork Cynthia, 2011, ARXIV11043913CS; Edwards H., 2015, ARXIV151105897CSSTAT; Feldman M., 2015, KDD; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Hardt M., 2016, NIPS; Joseph M., 2016, NIPS; Kamiran F., 2009, CONTR COMM 2009 2 IN; Kamiran F., 2010, ICDM; Kamiran F, 2012, KNOWL INF SYST, V33, P1, DOI 10.1007/s10115-011-0463-8; Kamishima T., 2011, ICDM WORKSH; Kingma D. P., 2015, P 3 INT C LEARN REPR; Madras D., 2018, ARXIV180206309CSSTAT; Mirza M., 2014, ARXIV14111784; Radford A., 2015, ARXIV PREPRINT ARXIV; Wu Y., 2016, DSAA; Zafar M. B., 2017, AISTATS; Zhang B. H., 2018, AIES; Zhang L., 2017, KDD; Zhang Lu, 2017, IJCAI 24 15 16 0 0 IEEE NEW YORK 345 E 47TH ST, NEW YORK, NY 10017 USA 2639-1589 978-1-5386-5035-6 IEEE INT CONF BIG DA 2018 570 575 6 Computer Science, Artificial Intelligence; Computer Science, Information Systems; Computer Science, Theory & Methods Computer Science BM7WO WOS:000468499300074 Green Submitted 2021-09-15 J van Steenkiste, S; Kurach, K; Schmidhuber, J; Gelly, S Steenkiste, Sjoerd van; Kurach, Karol; Schmidhuber, Juergen; Gelly, Sylvain Investigating object compositionality in Generative Adversarial Networks NEURAL NETWORKS English Article Generative Adversarial Networks; Objects; Compositionality; Generative modeling; Instance segmentation; Representation learning Deep generative models seek to recover the process with which the observed data was generated. They may be used to synthesize new samples or to subsequently extract representations. 
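A note for the GAN records compiled in this section: each cites Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672 for the underlying adversarial game between a generator and a discriminator. As a shared point of reference, here is a minimal sketch of one adversarial training step, assuming a PyTorch setup; the layer sizes and the names latent_dim, G and D are illustrative, not taken from any cited paper.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2  # illustrative sizes, not from any cited paper

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    # Discriminator step: push real scores toward 1 and fake scores toward 0.
    z = torch.randn(real_batch.size(0), latent_dim)
    fake = G(z).detach()
    d_loss = bce(D(real_batch), torch.ones(real_batch.size(0), 1)) + \
             bce(D(fake), torch.zeros(real_batch.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: fool the discriminator (non-saturating loss).
    z = torch.randn(real_batch.size(0), latent_dim)
    g_loss = bce(D(G(z)), torch.ones(real_batch.size(0), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

real = torch.randn(32, data_dim)  # stand-in for a batch of real data
d_l, g_l = train_step(real)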
Successful approaches in the domain of images are driven by several core inductive biases. However, a bias to account for the compositional way in which humans structure a visual scene in terms of objects has frequently been overlooked. In this work, we investigate object compositionality as an inductive bias for Generative Adversarial Networks (GANs). We present a minimal modification of a standard generator to incorporate this inductive bias and find that it reliably learns to generate images as compositions of objects. Using this general design as a backbone, we then propose two useful extensions to incorporate dependencies among objects and background. We extensively evaluate our approach on several multi-object image datasets and highlight the merits of incorporating structure for representation learning purposes. In particular, we find that our structured GANs are better at generating multi-object images that are more faithful to the reference distribution. Moreover, we demonstrate how, by leveraging the structure of the learned generative process, one can 'invert' the learned generative model to perform unsupervised instance segmentation. On the challenging CLEVR dataset, it is shown how our approach is able to improve over other recent purely unsupervised object-centric approaches to image generation. (C) 2020 Elsevier Ltd. All rights reserved. [Steenkiste, Sjoerd van; Schmidhuber, Juergen] SUPSI & USI, IDSIA, Via Cantonale 2C, CH-6928 Manno, Switzerland; [Kurach, Karol; Gelly, Sylvain] Google Brain, Brandschenkestr 110, CH-8002 Zurich, Switzerland van Steenkiste, S (corresponding author), SUPSI & USI, IDSIA, Via Cantonale 2C, CH-6928 Manno, Switzerland. sjoerd@idsia.ch; kkurach@google.com; juergen@idsia.ch; sylvaingelly@google.com van Steenkiste, Sjoerd/0000-0003-4324-3021 Swiss National Science FoundationSwiss National Science Foundation (SNSF)European Commission [200021_165675/1]; IBMInternational Business Machines (IBM) The authors wish to thank Damien Vincent, Alexander Kolesnikov, Olivier Bachem, Klaus Greff, and Paulo Rauber for helpful comments and constructive feedback. The authors are grateful to Marcin Michalski and Pierre Ruyssen for their technical support. This research was in part supported by the Swiss National Science Foundation grant 200021_165675/1, and by hardware donations from NVIDIA Corporation as part of the Pioneers of AI Research award, and by IBM. Arandjelovic R., 2019, ARXIV190511369; Arjovsky M., 2017, ARXIV170107875, P214; Azadi S., 2019, ARXIV180707560; Ba J.L., 2016, ARXIV PREPRINT ARXIV; Battaglia P. W., 2018, ARXIV180601261; Battaglia PW, 2013, P NATL ACAD SCI USA; Bengio Y, 2013, IEEE T PATTERN ANAL, V35, P1798, DOI 10.1109/TPAMI.2013.50; Bielski A., 2019, ADV NEURAL INFORM PR, V32, P7254; Chen M., 2019, P ADV NEUR INF PROC, V32, P12705; Chen X, 2016, ADV NEUR IN, V29; Dinh L., 2017, 5 INT C LEARN REPR; Donahue J., 2017, 5 INT C LEARN REPR; Dumoulin V., 2017, 5 INT C LEARN REPR; Eslami S. M. A., 2016, ADV NEURAL INFORM PR, P3233; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Greff K., 2019, INT C MACH LEARN, P2424; Greff K., 2017, ADV NEURAL INFORM PR, P6691; Greff Klaus, 2016, ADV NEURAL INFORM PR, P4484; Gregor K, 2015, PR MACH LEARN RES, V37, P1462; Gulrajani I., 2017, ADV NEURAL INFORM PR, V30, P5767; HEUSEL M., 2017, ADV NEURAL INFORM PR, P6626, DOI DOI 10.5555/3295222.3295408; Higgins I., 2017, 5 INT C LERAN REPR; Hinz T., 2019, INT C LEARN REPR; HUBERT L, 1985, J CLASSIF, V2, P193, DOI 10.1007/BF01908075; Im D.
J., 2016, ARXIV160205110; Isola P, 2017, PROC CVPR IEEE, P5967, DOI 10.1109/CVPR.2017.632; Janner M., 2019, INT C LEARN REPR; Johnson J, 2018, PROC CVPR IEEE, P1219, DOI 10.1109/CVPR.2018.00133; Johnson Justin, 2017, P IEEE C COMP VIS PA, P2901; Kingma D. P., 2014, 2 INT C LEARN REPR; Kingma D. P., 2015, P 3 INT C LEARN REPR; Kosiorek A. R., 2018, ADV NEURAL INFORM PR; Krizhevsky A, 2009, LEARNING MULTIPLE LA; Kurach K., 2019, INT C MACH LEARN; Kwak Hanock, 2016, ARXIV160705387; Le Roux N, 2011, NEURAL COMPUT, V23, P593, DOI 10.1162/NECO_a_00086; Lecun Y, 1998, P IEEE, V86, P2278, DOI 10.1109/5.726791; Lin CH, 2018, PROC CVPR IEEE, P9455, DOI 10.1109/CVPR.2018.00985; Lucic Mario, 2018, ADV NEURAL INFORM PR; Nash C., 2017, NIPS WORKSH LEARN DI; Oord A. v. d., 2016, P 33 INT C MACH LEAR; Radford A., 2015, ARXIV PREPRINT ARXIV; Santoro A., 2017, ADV NEURAL INFORM PR, P4967; SCHMIDHUBER J, 1992, NEURAL COMPUT, V4, P863, DOI 10.1162/neco.1992.4.6.863; Schmidhuber J., 2020, NEURAL NETWORKS; Schmidhuber J., 1990, FKI12690 FKI; Spampinato C., 2019, INT J COMPUT VISION, P1; Spelke ES, 2007, DEVELOPMENTAL SCI, V10, P89, DOI 10.1111/j.1467-7687.2007.00569.x; van Steenkiste S., 2018, INT C LEARN REPR; van Steenkiste S., 2018, NEUR WORKSH MOD PHYS; Vaswani A., 2017, ADV NEURAL INFORM PR, V30, P5998, DOI DOI 10.5555/3295222.3295349; Xu K., 2018, ARXIV180703877; Yang J., 2017, 5 INT C LERAN REPR; Yoshida Y.(, 2018, INT C LEARN REPR; Zambaldi V., 2019, INT C LEARN REPR 55 2 2 0 5 PERGAMON-ELSEVIER SCIENCE LTD OXFORD THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND 0893-6080 1879-2782 NEURAL NETWORKS Neural Netw. OCT 2020 130 309 325 10.1016/j.neunet.2020.07.007 17 Computer Science, Artificial Intelligence; Neurosciences Computer Science; Neurosciences & Neurology NM0AP WOS:000567768400003 32736226 Green Submitted 2021-09-15 C Abusitta, A; Aimeur, E; Wahab, OA DeGiacomo, G; Catala, A; Dilkina, B; Milano, M; Barro, S; Bugarin, A; Lang, J Abusitta, Adel; Aimeur, Esma; Wahab, Omar Abdel Generative Adversarial Networks for Mitigating Biases in Machine Learning Systems ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE Frontiers in Artificial Intelligence and Applications English Proceedings Paper 24th European Conference on Artificial Intelligence (ECAI) AUG 29-SEP 08, 2020 European Assoc Artificial Intelligence, ELECTR NETWORK Spanish Assoc Artificial Intelligence, Univ Santiago Compostela, Res Ctr Intelligent Technologies European Assoc Artificial Intelligence In this paper, we propose a new framework for mitigating biases in machine learning systems. The problem of the existing mitigation approaches is that they are model-oriented in the sense that they focus on tuning the training algorithms to produce fair results, while overlooking the fact that the training data can itself be the main reason for biased outcomes. Technically speaking, two essential limitations can be found in such model-based approaches: 1) the mitigation cannot be achieved without degrading the accuracy of the machine learning models, and 2) when the data used for training are largely biased, the training time automatically increases so as to find suitable learning parameters that help produce fair results. To address these shortcomings, we propose in this work a new framework that can largely mitigate the biases and discriminations in machine learning systems while at the same time enhancing the prediction accuracy of these systems. 
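As the record goes on to state, the proposed framework is based on conditional GANs. Below is a minimal sketch of cGAN-style conditioning on a population-group label, assuming a PyTorch setup; n_groups, the layer sizes, and the class name are illustrative, not taken from the cited paper.

import torch
import torch.nn as nn

latent_dim, n_groups, data_dim = 16, 2, 8  # illustrative sizes

class CondGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + n_groups, 64),
                                 nn.ReLU(), nn.Linear(64, data_dim))
    def forward(self, z, group):
        # Condition on the population group via a one-hot code, so synthetic
        # records can be sampled per group in chosen proportions.
        onehot = nn.functional.one_hot(group, n_groups).float()
        return self.net(torch.cat([z, onehot], dim=1))

G = CondGenerator()
z = torch.randn(4, latent_dim)
group = torch.tensor([0, 0, 1, 1])  # oversample whichever group is underrepresented
samples = G(z, group)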
The proposed framework is based on conditional Generative Adversarial Networks (cGANs), which are used to generate new synthetic fair data with selective properties from the original data. We also propose a framework for analyzing data biases, which is important for understanding the amount and type of data that need to be synthetically sampled and labeled for each population group. Experimental results show that the proposed solution can efficiently mitigate different types of biases, while at the same time enhancing the prediction accuracy of the underlying machine learning model. [Abusitta, Adel] McGill Univ, Montreal, PQ, Canada; [Aimeur, Esma] Univ Montreal, Montreal, PQ, Canada; [Wahab, Omar Abdel] Univ Quebec Outaouais, Gatineau, PQ, Canada Abusitta, A (corresponding author), McGill Univ, Montreal, PQ, Canada. adel.abusitta@mcgill.ca; aimeur@iro.umontreal.ca; omar.abdulwahab@uqo.ca Natural Sciences and Engineering Research Council of CanadaNatural Sciences and Engineering Research Council of Canada (NSERC)CGIAR The financial support of the Natural Sciences and Engineering Research Council of Canada is gratefully acknowledged. We also would like to acknowledge Dr. Gilles Brassard (University of Montreal), Dr. Kimiz Dalkir (McGill University), Younes Driouiche (Mila), Alexis Tremblay, Amine Belabed and Rim Ben Salem for helpful discussions. Abusitta A, 2018, COMPUT NETW, V145, P52, DOI 10.1016/j.comnet.2018.08.009; Abusitta A, 2018, J CLOUD COMPUT-ADV S, V7, DOI 10.1186/s13677-018-0109-4; Abusitta Adel, 2018, P 21 C INN CLOUDS IN, P1; Abusitta Adel, 2019, FUTURE GENERATION CO; Agarwal A., 2018, ARXIV180302453; [Anonymous], 2019, COMPAS RECIDIVISM RI; [Anonymous], 2019, ADIENCE DATA SET; [Anonymous], 2019, MACHINE LEARNING BIA; [Anonymous], 2019, FAIRNESS BIAS COMPAS; Bengio Y., 2006, P ADV NEUR INF PROC, V19, P153; Brackey Adrienne, 2019, THESIS; Brennan William Dieterich Tim, 2019, CORRECTIONAL OFFENDE; Calmon F., 2017, ADV NEURAL INFORM PR, P3992; Camino R.D., 2018, ARXIV180701202; Campolo A., 2017, AI NOW 2017 REPORT; Celis L Elisa, 2019, ARXIV190110443; Challen R, 2019, BMJ QUAL SAF, V28, P231, DOI 10.1136/bmjqs-2018-008370; Chen X., 2018, ARXIV180201765; Doersch Carl, 2016, ARXIV160605908; Feldman M, 2015, KDD'15: PROCEEDINGS OF THE 21ST ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, P259, DOI 10.1145/2783258.2783311; Goh G., 2016, ADV NEURAL INFORM PR, P2415; Goodfellow I, 2016, ARXIV170100160; Goodfellow I, 2016, ADAPT COMPUT MACH LE, P1; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gulrajani I., 2017, ADV NEURAL INFORM PR, V30, P5767; Halabi T, 2019, INT CONF COMPUT NETW, P370, DOI 10.1109/ICCNC.2019.8685509; Halabi T, 2018, 2018 5TH IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND CLOUD COMPUTING (IEEE CSCLOUD 2018) / 2018 4TH IEEE INTERNATIONAL CONFERENCE ON EDGE COMPUTING AND SCALABLE CLOUD (IEEE EDGECOM 2018), P83, DOI 10.1109/CSCloud/EdgeCom.2018.00023; Hardt M, 2016, ADV NEURAL INFORM PR; Jang E, 2016, P BAYES DEEP LEARN W; Kaiming He, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), P770, DOI 10.1109/CVPR.2016.90; Kamiran F, 2012, KNOWL INF SYST, V33, P1, DOI 10.1007/s10115-011-0463-8; Karani Dhruvil, 2019, INTRO WORD EMBEDDING; Kenney Matthew, 2019, AMAZON REKOGNITION; Khosla A, 2012, LECT NOTES COMPUT SC, V7572, P158, DOI 10.1007/978-3-642-33718-5_12; Kingma D.
P., 2014, NIPS, P3581; Kivinen J, 1997, INFORM COMPUT, V132, P1, DOI 10.1006/inco.1996.2612; Krasanakis E, 2018, WEB CONFERENCE 2018: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW2018), P853, DOI 10.1145/3178876.3186133; LeCun Y, 2015, NATURE, V521, P436, DOI 10.1038/nature14539; Louppe G., 2017, ADV NEURAL INFORM PR, P981; Maddison C.J., 2016, ARXIV161100712; Madras D., 2018, ARXIV180206309; Mirza M., 2014, ARXIV14111784; Mitchell T. M., 1980, NEED BIASES LEARNING; ONEILL B, 1987, P NATL ACAD SCI USA, V84, P2106, DOI 10.1073/pnas.84.7.2106; Pleiss Geoff, 2017, P C ADV NEUR INF PRO, V30, P5680; Rodriguez P, 2017, PATTERN RECOGN, V72, P563, DOI 10.1016/j.patcog.2017.06.028; Ruder S., 2019, P 2019 C N AM CHAPT, P15, DOI [10.18653/v1/N19-5004, DOI 10.18653/V1/N19-5004]; Singhi SK, 2006, P 23 INT C MACH LEAR, P849; Tonk Stijn, 2019, FAIRNESS ML ADVERSAR; Torralba A, 2011, PROC CVPR IEEE, P1521, DOI 10.1109/CVPR.2011.5995347; Wakabayashi Daisuke, 2019, GOOGLE FINDS ITS UND; Woodworth Blake, 2017, ARXIV170206081; Xu DP, 2018, IEEE INT CONF BIG DA, P570, DOI 10.1109/BigData.2018.8622525; Yen SJ, 2006, LECT NOTES CONTR INF, V344, P731; Zhang BH, 2018, PROCEEDINGS OF THE 2018 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY (AIES'18), P335, DOI 10.1145/3278721.3278779 55 0 0 0 0 IOS PRESS AMSTERDAM NIEUWE HEMWEG 6B, 1013 BG AMSTERDAM, NETHERLANDS 0922-6389 1879-8314 978-1-64368-101-6; 978-1-64368-100-9 FRONT ARTIF INTEL AP 2020 325 937 944 10.3233/FAIA200186 8 Computer Science, Artificial Intelligence Computer Science BR4CH WOS:000650971301024 2021-09-15 J Sattigeri, P; Hoffman, SC; Chenthamarakshan, V; Varshney, KR Sattigeri, P.; Hoffman, S. C.; Chenthamarakshan, V; Varshney, K. R. Fairness GAN: Generating datasets with fairness properties using a generative adversarial network IBM JOURNAL OF RESEARCH AND DEVELOPMENT English Article OF-THE-ART We introduce the Fairness GAN (generative adversarial network), an approach for generating a dataset that is plausibly similar to a given multimedia dataset, but is more fair with respect to protected attributes in decision making. We propose a novel auxiliary classifier GAN that strives for demographic parity or equality of opportunity and show empirical results on several datasets, including the CelebFaces Attributes (CelebA) dataset, the Quick, Draw! dataset, and a dataset of soccer player images and the offenses for which they were called. The proposed formulation is well suited to absorbing unlabeled data; we leverage this to augment the soccer dataset with the much larger CelebA dataset. The methodology tends to improve demographic parity and equality of opportunity while generating plausible images. [Sattigeri, P.; Hoffman, S. C.; Chenthamarakshan, V; Varshney, K. R.] IBM Res, Yorktown Hts, NY 10598 USA Sattigeri, P (corresponding author), IBM Res, Yorktown Hts, NY 10598 USA. 
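The two fairness criteria named in the Fairness GAN abstract above have standard definitions: demographic parity requires equal positive-decision rates across protected groups, and equality of opportunity requires equal true-positive rates. A minimal numpy sketch, assuming binary arrays y_pred, y_true and a protected-group indicator a (all names illustrative):

import numpy as np

def demographic_parity_gap(y_pred, a):
    # Difference in positive-prediction rates across protected groups.
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def equal_opportunity_gap(y_pred, y_true, a):
    # Difference in true-positive rates across protected groups.
    tpr = lambda g: y_pred[(a == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

y_pred = np.array([1, 0, 1, 1]); y_true = np.array([1, 1, 1, 0]); a = np.array([0, 0, 1, 1])
print(demographic_parity_gap(y_pred, a), equal_opportunity_gap(y_pred, y_true, a))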
pnattig@us.ibm.com; shoffman@ibm.com; ecvijil@us.ibm.com; kvarshn@us.ibm.com Adel T, 2019, THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, P2412; Beutel A., 2017, P WORKSH FAIRN ACC T; Bohlen M., 2017, ARXIV171108801; Chandler S., 2017, AI CHATBOT WILL HIRE; d'Alessandro B, 2017, BIG DATA-US, V5, P120, DOI 10.1089/big.2016.0048; Dumoulin V., 2017, P INT C LEARN REP; Edwards H., 2016, P INT C LEARN REP; ELAZAR Y, 2018, EMNLP; Friedler SA, 2019, FAT*'19: PROCEEDINGS OF THE 2019 CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, P329, DOI 10.1145/3287560.3287589; Goodfellow I, 2016, ARXIV170100160; Gu S., 2017, P INT C LEARN REP; Gulrajani I, 2017, IMPROVED TRAINING WA, DOI DOI 10.5555/3295222.3295327; Kaiming He, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), P770, DOI 10.1109/CVPR.2016.90; Kamiran F, 2012, KNOWL INF SYST, V33, P1, DOI 10.1007/s10115-011-0463-8; Lim Jae Hyun, 2017, ARXIV170502894; Liu ZW, 2015, IEEE I CONF COMP VIS, P3730, DOI 10.1109/ICCV.2015.425; Louppe G., 2017, ADV NEURAL INFORM PR, P981; Lu Y., 2017, ARXIV170509966; Maddison C. J., 2017, P INT C LEARN REP; Madras D, 2018, PR MACH LEARN RES, V80; Miyato T., 2018, P INT C LEARN REP; Odena A, 2017, PR MACH LEARN RES, V70; Perarnau G., 2016, P NIPS WORKSH ADV TR; Perelman L, 2014, ASSESS WRIT, V21, P104, DOI 10.1016/j.asw.2014.05.001; Shahani A., 2015, NOW ALGORITHMS ARE D; Shermis MD, 2014, ASSESS WRIT, V20, P53, DOI 10.1016/j.asw.2013.04.001; Silberzahn R., 2017, ADV METHODS PRACT PS, V1, P337; TailSpectrum, 2016, QUICKDR GOOGL DIDNT; TURK M, 1991, J COGNITIVE NEUROSCI, V3, P71, DOI 10.1162/jocn.1991.3.1.71; Wadsworth C., 2018, P WORKSH FAIRN ACC T; Williams BA, 2018, J INFORM POLICY, V8, P78, DOI 10.5325/jinfopoli.8.2018.0078; Xu DP, 2018, IEEE INT CONF BIG DA, P570, DOI 10.1109/BigData.2018.8622525; Zhang BH, 2018, PROCEEDINGS OF THE 2018 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY (AIES'18), P335, DOI 10.1145/3278721.3278779; Zhu JY, 2017, IEEE I CONF COMP VIS, P2242, DOI 10.1109/ICCV.2017.244 34 7 7 0 0 IBM CORP ARMONK 1 NEW ORCHARD ROAD, ARMONK, NY 10504 USA 0018-8646 2151-8556 IBM J RES DEV IBM J. Res. Dev. JUL-SEP 2019 63 4-5 3 10.1147/JRD.2019.2945519 9 Computer Science, Hardware & Architecture; Computer Science, Information Systems; Computer Science, Software Engineering; Computer Science, Theory & Methods Computer Science JQ4IX WOS:000498912200004 2021-09-15 C Castelle, M Assoc Comp Machinery Castelle, Michael The Social Lives of Generative Adversarial Networks FAT* '20: PROCEEDINGS OF THE 2020 CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY English Proceedings Paper ACM Conference on Fairness, Accountability, and Transparency (FAT) JAN 27-30, 2020 Barcelona, SPAIN Assoc Comp Machinery generative adversarial networks; sociological theory; habitus; bias; game theory Generative adversarial networks (GANs) are a genre of deep learning model of significant practical and theoretical interest for their facility in producing photorealistic 'fake' images which are plausibly similar, but not identical, to a corpus of training data. But from the perspective of a sociologist, the distinctive architecture of GANs is highly suggestive. 
First, a convolutional neural network for classification, on its own, is (at present) popularly considered to be an 'AI'; and a generative neural network is a kind of inversion of such a classification network (i.e. a layered transformation from a vector of numbers to an image, as opposed to a transformation from an image to a vector of numbers). If, then, in the training of GANs, these two 'AIs' interact with each other in a dyadic fashion, shouldn't we consider that form of learning... social? This observation can lead to some surprising associations as we compare and contrast GANs with the theories of the sociologist Pierre Bourdieu, whose concept of the so-called habitus is one which is simultaneously cognitive and social: a productive perception in which classification practices and practical action cannot be fully disentangled. Bourdieu had long been concerned with the reproduction of social stratification: his early works studied formal public schooling in France not as an egalitarian system but instead as one which unintentionally maintained the persistence of class distinctions. It was, he argued, through the cultural inculcation of an embodied and partially unconscious habitus, a "durably installed generative principle of regulated improvisations", that students from the upper classes are given an advantage which is only further reinforced throughout their educational trajectories. For Bourdieu, institutions of schooling instill "deeply interiorized master patterns" of behavior and thought (and classification) which in turn direct the acquisition of subsequent patterns, whose character is determined not simply by this cognitive layering but by their actual use in lived practice, especially early in childhood development. In this work I develop a productive analogy between the GAN architecture and Bourdieu's habitus, in three ways. First, I call attention to the fact that connectionist approaches and Bourdieu's theories were both conceived as revolts against rule-bound paradigms. In the 1980s, Rumelhart and McClelland used a multilayer neural network to learn the phonology of English past-tense verbs because "sometimes we don't follow the rules... language is full of exceptions to the rules"; and in the case of Bourdieu, the habitus was an answer to a long-standing question: "how can behaviour be regulated without being the product of obedience to rules?" Bourdieu strove to transgress what was then seen in the social sciences as a conceptual opposition between structure-based theories of social life and those which emphasized an embodied agency. Second, I suggest that concerns about bias and discrimination in machine learning in recent years can in part be attributed to the increased use of ML models not just for static classification but for practical action. Similarly, the habitus for Bourdieu is simultaneously durable and transposable: its judgments may be relatively stable, but are capable of being deployed dynamically in novel and varying social situations, or what ML practitioners might call generalizability. We can thus theorize generative models (including GANs) as biased not just in their stereotyped classifications, but through their potential for actively generating new biased data. These generated actions then recursively become part of the social arena Bourdieu called the field, into which new agents are 'born' and for which they may know few alternatives.
Finally, it is intriguing that GAN researchers and Bourdieu both extensively use metaphors from game theory. Goodfellow described the GAN architecture as a "two-player minimax game with value function V(G,D)", meaning that there is a single abstract function whose output value the discriminator is trying to maximize and which the generator is trying to minimize; but the dynamic nature of the GAN training process means that convergence to Nash equilibrium is nontrivial. But for Bourdieu, such a utility-based approach to artistic creation could not be more crude when compared to the social reality of art worlds: utilitarianism is, for him, "the degree zero of sociology", by which he means an isolated, inert, and amodal-and therefore not particularly sociological-starting point. Moreover, 19th-century bohemian culture was characterized primarily by its inversion of financial incentives, in which failure is a kind of success, and "selling out" (i.e. maximizing profit) worst of all; and thus the relentless optimization of neural networks may be fundamentally at odds with the "value functions" of many human artists. I conclude that deep learning, while primarily understood as a scientific and technical achievement, may also intentionally or unintentionally constitute a nascent, independent reinvention of social theory. [Castelle, Michael] Univ Warwick, Coventry, W Midlands, England Castelle, M (corresponding author), Univ Warwick, Coventry, W Midlands, England. M.Castelle.1@warwick.ac.uk 0 3 3 0 0 ASSOC COMPUTING MACHINERY NEW YORK 1515 BROADWAY, NEW YORK, NY 10036-9998 USA 978-1-4503-6936-7 2020 413 413 10.1145/3351095.3373156 1 Computer Science, Artificial Intelligence; Computer Science, Interdisciplinary Applications; Ethics Computer Science; Social Sciences - Other Topics BQ8FJ WOS:000620151400054 2021-09-15 J Bhatia, H; Paul, W; Alajaji, F; Gharesifard, B; Burlina, P Bhatia, Himesh; Paul, William; Alajaji, Fady; Gharesifard, Bahman; Burlina, Philippe Least kth-Order and Renyi Generative Adversarial Networks NEURAL COMPUTATION English Article CUTOFF RATES; DIVERGENCE; ENTROPY We investigate the use of parameterized families of information-theoretic measures to generalize the loss functions of generative adversarial networks (GANs) with the objective of improving performance. A new generator loss function, least kth-order GAN (LkGAN), is introduced, generalizing the least squares GANs (LSGANs) by using a kth-order absolute error distortion measure with k >= 1 (which recovers the LSGAN loss function when k = 2). It is shown that minimizing this generalized loss function under an (unconstrained) optimal discriminator is equivalent to minimizing the kth-order Pearson-Vajda divergence. Another novel GAN generator loss function is next proposed in terms of Renyi cross-entropy functionals with order alpha > 0, alpha not equal 1. It is demonstrated that this Renyi-centric generalized loss function, which provably reduces to the original GAN loss function as alpha -> 1, preserves the equilibrium point satisfied by the original GAN based on the Jensen-Renyi divergence, a natural extension of the Jensen-Shannon divergence. Experimental results indicate that the proposed loss functions, applied to the MNIST and CelebA data sets, under both DCGAN and StyleGAN architectures, confer performance benefits by virtue of the extra degrees of freedom provided by the parameters k and alpha, respectively. 
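The parameterized quantities in this abstract can be written out. The Rényi divergence of order \(\alpha\), a fact independent of the cited paper, is

\[
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \log \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx, \qquad \alpha > 0,\ \alpha \neq 1,
\]

and recovers the Kullback-Leibler divergence in the limit \(\alpha \to 1\); this limit is what underlies the stated reduction to the original GAN loss. One plausible schematic form of the least kth-order generator loss, assuming the LSGAN convention of a target label 1 for generated samples (the authors' exact labeling parameters may differ), is

\[
\mathcal{L}_{k}(G) \;=\; \mathbb{E}_{z \sim p_{z}}\!\left[\, \lvert D(G(z)) - 1 \rvert^{k} \,\right], \qquad k \geq 1,
\]

which recovers the LSGAN generator loss at \(k = 2\) up to a constant factor, and whose minimization under an unconstrained optimal discriminator the abstract equates with minimizing the kth-order Pearson-Vajda divergence.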
More specifically, experiments show improvements with regard to the quality of the generated images as measured by the Frechet inception distance score and training stability. While it was applied to GANs in this study, the proposed approach is generic and can be used in other applications of information theory to deep learning, for example, the issues of fairness or privacy in artificial intelligence. [Bhatia, Himesh; Alajaji, Fady; Gharesifard, Bahman] Queens Univ, Dept Math & Stat, Toronto, ON K7L 3N6, Canada; [Paul, William; Burlina, Philippe] Johns Hopkins Univ, Appl Phys Lab, Laurel, MD 20723 USA; [Burlina, Philippe] Johns Hopkins Univ, Dept Comp Sci, Baltimore, MD 21218 USA Bhatia, H (corresponding author), Queens Univ, Dept Math & Stat, Toronto, ON K7L 3N6, Canada. himesh.bhatia@queensu.ca; william.paul@jhuapl.edu; fa@queensu.ca; bahman.gharesifard@queensu.ca; philippe.burlina@jhuapl.edu Abadi M., 2015, TENSORFLOW LARGE SCA; Achille A., 2019, ARXIV190512213; Alajaji F, 2004, IEEE T INFORM THEORY, V50, P663, DOI 10.1109/TIT.2004.825040; Arikan E, 1996, IEEE T INFORM THEORY, V42, P99, DOI 10.1109/18.481781; Arjovsky M., 2017, ARXIV170107875, P214; BENBASSAT M, 1978, IEEE T INFORM THEORY, V24, P324, DOI 10.1109/TIT.1978.1055890; Bhatia H., 2020, ARXIV200602479; Burlina P., 2020, ARXIV200413515; CAMPBELL LL, 1965, INFORM CONTROL, V8, P423, DOI 10.1016/S0019-9958(65)90332-3; Chen L., 2018, P MACHINE LEARNING R; Chen X., 2016, ARXIV160603657; Courtade T.A., 2014, IEEE INT SYMP INFO, P2494; Creswell A, 2018, IEEE SIGNAL PROC MAG, V35, P53, DOI 10.1109/MSP.2017.2765202; CSISZAR I, 1995, IEEE T INFORM THEORY, V41, P26, DOI 10.1109/18.370121; Csiszar I., 1967, STUD SCI MATH HUNG, V2, P299; Engel E., 2011, THEOR MATH PHYS SER; Ermon Stefano, 2018, 32 AAAI C ART INT; Esposito A.R., 2020, P INT ZUR SEM INF CO, P96, DOI [10.3929/ethz-b-000403224, DOI 10.3929/ETHZ-B-000403224]; Farnia F., 2018, ADV NEURAL INFORM PR, P5248; Gal Y, 2017, P 34 INT C MACH LEAR, P2052; Garnett R., 2016, ADV NEURAL INFORM PR, V29, P1073; Goodfellow I., 2016, NIPS 2016 TUTORIAL G; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Hamza A.B., 2003, 2003 IEEE INT S INF, P257; He Y, 2003, IEEE T SIGNAL PROCES, V51, P1211, DOI 10.1109/TSP.2003.810305; Heusel M., 2017, ADV NEURAL INFORM PR, P6629; Huang C., 2018, ARXIV180705306; Karras T, 2019, PROC CVPR IEEE, P4396, DOI 10.1109/CVPR.2019.00453; Katsoulakis M., 2020, ARXIV200606625; Kingma D. P., 2014, 2 INT C LEARN REPR I; Kingma D.P., 2018, ADV NEURAL INFORM PR, V31, P10215; Kluza PA, 2020, PHYSICA A, V548, DOI 10.1016/j.physa.2019.122527; LeCun Y., 1998, MNIST HANDWRITTEN DI; Lee K, 2021, PEDIATR RES, DOI 10.1038/s41390-021-01511-9; Li C., 2019, P 22 INT C ART INT S, V89, P3302; Liu Z., 2015, IEEE I CONF COMP VIS, DOI DOI 10.1109/ICCV.2015.425; Mao X., 2017, ARXIV171206391; Mao X., 2017, P IEEE INT, P2794; Mescheder L, 2018, PR MACH LEARN RES, V80; Murphy K., 2017, P INT C LEARN REPR I, P1; Mwebaze E., 2010, P 18 EUR ART NEUR NE, P247; Nielsen F., 2019, ARXIV191200610; Nielsen F, 2014, IEEE SIGNAL PROC LET, V21, P10, DOI 10.1109/LSP.2013.2288355; Nowozin S., 2016, ARXIV160600709; Nystrom A., 2020, ARXIV191000927V2; Oord Avd., 2016, ARXIV160903499; Paul W, 2021, NEURAL COMPUT, V33, P802, DOI 10.1162/neco_a_01359; Principe J. 
C., 2010, INFORM SCI STAT; Rached Z., 1999, P 33 C INF SCI SYST, P613; Radford A., 2017, P 9 INT C IM GRAPH, P97; Renyi A., 1961, P 4 BERK S MATH STAT, V4, P547, DOI DOI 10.1021/JP106846B; Sarraf A., 2021, SN COMPUTER SCI, V2, P17; Sason I., 2018, ENTROPY-SWITZ, V20, P1; Tishby N, 2015, 2015 IEEE INFORMATION THEORY WORKSHOP (ITW); Valverde-Albacete FJ, 2019, ENTROPY-SWITZ, V21, DOI 10.3390/e21010046; van Erven T, 2014, IEEE T INFORM THEORY, V60, P3797, DOI 10.1109/TIT.2014.2320500; Verdu S., 2015, 2015 INF THEOR APPL, P1; Wang Z., 2020, GENERATIVE ADVERSARI; Wickstrom K., 2019, ARXIV190911396; Zaidi A, 2020, ENTROPY-SWITZ, V22, DOI 10.3390/e22020151; Zhao MY, 2020, AAAI CONF ARTIF INTE, V34, P6901 61 0 0 0 0 MIT PRESS CAMBRIDGE ONE ROGERS ST, CAMBRIDGE, MA 02142-1209 USA 0899-7667 1530-888X NEURAL COMPUT Neural Comput. AUG 19 2021 33 9 2473 2510 10.1162/neco_a_01416 38 Computer Science, Artificial Intelligence; Neurosciences Computer Science; Neurosciences & Neurology UC1XU WOS:000686328000005 34412112 Green Submitted, Bronze 2021-09-15 J Li, X; Fang, M; Li, HK Li, Xiao; Fang, Min; Li, Haikun Bias alleviating generative adversarial network for generalized zero-shot classification IMAGE AND VISION COMPUTING English Article Generalized zero shot classification; Generative adversarial network; Unseen visual prototypes; Cluster centers; Semantic relationships Generalized zero-shot classification is predicting the labels of test images coming from seen or unseen classes. The task is difficult because of the bias problem, that is, unseen samples are easily misclassified as seen classes. Many methods have handled the problem by training a generative adversarial network (GAN) to generate fake samples. However, the GAN model trained with seen samples might not be appropriate for generating unseen samples. To deal with this problem, we learn a bias alleviating generative adversarial network for generalized zero-shot classification by generating seen and unseen samples simultaneously. We train the generator to generate more realistic unseen samples by adding semantic similarity and cluster center regularizations to alleviate the bias problem. The semantic similarity regularization restricts the relationships between the generated unseen visual prototypes and the seen visual prototypes via their class prototypes, so that the generated unseen samples do not become similar to the seen samples. The cluster center regularization utilizes the cluster structure of the target data to place the generated unseen visual prototypes near the most similar cluster centers, generating realistic unseen samples. Experiments show that the proposed method achieves promising results. (C) 2020 Elsevier B.V. All rights reserved. [Li, Xiao; Fang, Min; Li, Haikun] Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China Fang, M (corresponding author), Xidian Univ, Sch Comp Sci & Technol, Xian 710071, Peoples R China.
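The cluster center regularization described in the abstract above pulls generated unseen prototypes toward cluster centers of the target data. A rough numpy/scikit-learn sketch of such a penalty, assuming unlabeled target-domain features are available; the sizes and the name cluster_center_penalty are illustrative, not from the cited paper.

import numpy as np
from sklearn.cluster import KMeans

target_feats = np.random.randn(500, 32)  # unlabeled target-domain features
centers = KMeans(n_clusters=10, n_init=10).fit(target_feats).cluster_centers_

def cluster_center_penalty(gen_prototypes):
    # Squared distance from each generated unseen prototype to its nearest
    # target cluster center; small values indicate realistic prototypes.
    d2 = ((gen_prototypes[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

penalty = cluster_center_penalty(np.random.randn(20, 32))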
mfang@mail.xidian.edu.cn National Natural Science Foundation of ChinaNational Natural Science Foundation of China (NSFC) [61806155]; China Postdoctoral Science FoundationChina Postdoctoral Science Foundation [2018M631125]; Fundamental Research Funds for the Central UniversitiesFundamental Research Funds for the Central Universities [XJS200303]; National Natural Science Foundation of shaanxi province [2020JQ-323, 2020GY-062]; Nature Science Foundation of Anhui ProvinceNatural Science Foundation of Anhui Province [1908085MF186] This work is supported by National Natural Science Foundation of China under Grant no. 61806155, China Postdoctoral Science Foundation funded project under Grant no. 2018M631125, Fundamental Research Funds for the Central Universities under Grant no. XJS200303, National Natural Science Foundation of shaanxi province (Grant No. 2020JQ-323, 2020GY-062), Nature Science Foundation of Anhui Province under Grant no. 1908085MF186. Akata Zeynep, 2013, NIPS WORKSH OUTP REP; Atzmon Y, 2019, PROC CVPR IEEE, P11663, DOI 10.1109/CVPR.2019.01194; Changpinyo S, 2017, IEEE I CONF COMP VIS, P3496, DOI 10.1109/ICCV.2017.376; Changpinyo S, 2016, PROC CVPR IEEE, P5327, DOI 10.1109/CVPR.2016.575; Chao WL, 2016, LECT NOTES COMPUT SC, V9906, P52, DOI 10.1007/978-3-319-46475-6_4; Elhoseiny M, 2013, IEEE I CONF COMP VIS, P2584, DOI 10.1109/ICCV.2013.321; Farhadi A, 2009, PROC CVPR IEEE, P1778, DOI 10.1109/CVPRW.2009.5206772; Felix R, 2018, LECT NOTES COMPUT SC, V11210, P21, DOI 10.1007/978-3-030-01231-1_2; Fu YW, 2014, LECT NOTES COMPUT SC, V8690, P584, DOI 10.1007/978-3-319-10605-2_38; Gan C, 2016, PROC CVPR IEEE, P87, DOI 10.1109/CVPR.2016.17; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gulrajani I., 2017, ADV NEURAL INFORM PR, V30, P5767; Huang H, 2019, PROC CVPR IEEE, P801, DOI 10.1109/CVPR.2019.00089; Jiang B, 2017, PROC CVPR IEEE, P550, DOI 10.1109/CVPR.2017.66; Kodirov E, 2017, PROC CVPR IEEE, P4447, DOI 10.1109/CVPR.2017.473; Kodirov E, 2015, IEEE I CONF COMP VIS, P2452, DOI 10.1109/ICCV.2015.282; Lampert CH, 2014, IEEE T PATTERN ANAL, V36, P453, DOI 10.1109/TPAMI.2013.140; Li Jian, 2019, CVPR; Li JJ, 2019, PROC CVPR IEEE, P7394, DOI 10.1109/CVPR.2019.00758; Li X, 2018, KNOWL-BASED SYST, V160, P176, DOI 10.1016/j.knosys.2018.06.034; Li X, 2017, NEUROCOMPUTING, V238, P76, DOI 10.1016/j.neucom.2017.01.038; Liu S., 2018, ADV NEURAL INFORM PR, P2005; Liu ZZ, 2020, IMAGE VISION COMPUT, V98, DOI 10.1016/j.imavis.2020.103924; Long Y, 2018, IEEE T PATTERN ANAL, V40, P2498, DOI 10.1109/TPAMI.2017.2762295; Mirza M., 2014, ARXIV14111784; Ni J., 2019, NEURIPS; Norouzi M, 2014, P INT C LEARN REPR; Patterson G, 2012, PROC CVPR IEEE, P2751, DOI 10.1109/CVPR.2012.6247998; Paul A, 2019, PROC CVPR IEEE, P7049, DOI 10.1109/CVPR.2019.00722; Romera-Paredes B, 2015, PR MACH LEARN RES, V37, P2152; Schonfeld E, 2019, PROC CVPR IEEE, P8239, DOI 10.1109/CVPR.2019.00844; Shigeto Y, 2015, LECT NOTES ARTIF INT, V9284, P135, DOI 10.1007/978-3-319-23528-8_9; Socher R., 2013, ADV NEURAL INFORM PR, DOI DOI 10.1007/978-3-319-46478-7; Song J, 2018, PROC CVPR IEEE, P1024, DOI 10.1109/CVPR.2018.00113; van der Maaten L, 2008, J MACH LEARN RES, V9, P2579; Wah Catherine, 2011, CALTECH UCSD BIRDS 2; Wang DH, 2016, THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, P2145; Xian Y., 2018, IEEE T PATTERN ANAL; Xian YQ, 2019, PROC CVPR IEEE, P10267, DOI 10.1109/CVPR.2019.01052; Xian YQ, 2018, PROC CVPR IEEE, P5542, DOI 10.1109/CVPR.2018.00581; Xian YQ, 2016, PROC CVPR IEEE, P69, DOI 10.1109/CVPR.2016.15; Xu X, 2017, PROC 
CVPR IEEE, P2007, DOI 10.1109/CVPR.2017.217; Zhang, 2018, IEEE T CYBERNETICS; Zhang F., 2019, INT C MACH LEARN, P7434; Zhang Hongguang, 2018, ECCV; Zhang L, 2017, PROCEEDINGS OF 2017 INTERNATIONAL CONFERENCE ON PUBLIC ADMINISTRATION (12TH) & INTERNATIONAL SYMPOSIUM ON WEST AFRICAN STUDIES (1ST), VOL I, P207; ZHU Y, 2018, IEEE INT CONF COMM, P4321 47 1 1 1 3 ELSEVIER AMSTERDAM RADARWEG 29, 1043 NX AMSTERDAM, NETHERLANDS 0262-8856 1872-8138 IMAGE VISION COMPUT Image Vis. Comput. JAN 2021 105 104077 10.1016/j.imavis.2020.104077 9 Computer Science, Artificial Intelligence; Computer Science, Software Engineering; Computer Science, Theory & Methods; Engineering, Electrical & Electronic; Optics Computer Science; Engineering; Optics PY3ZH WOS:000611984800011 2021-09-15 J Ngxande, M; Tapamo, JR; Burke, M Ngxande, Mkhuseli; Tapamo, Jules-Raymond; Burke, Michael Bias Remediation in Driver Drowsiness Detection Systems Using Generative Adversarial Networks IEEE ACCESS English Article Population bias; GAN; visualisation; CNN Datasets are crucial when training a deep neural network. When datasets are unrepresentative, trained models are prone to bias because they are unable to generalise to real world settings. This is particularly problematic for models trained in specific cultural contexts, which may not represent a wide range of races, and thus fail to generalise. This is a particular challenge for driver drowsiness detection, where many publicly available datasets are unrepresentative as they cover only certain ethnicity groups. Traditional augmentation methods are unable to improve a model's performance when tested on other groups with different facial attributes, and it is often challenging to build new, more representative datasets. In this paper, we introduce a novel framework that boosts drowsiness detection performance for different ethnicity groups. Our framework improves a Convolutional Neural Network (CNN) trained for prediction by using Generative Adversarial Networks (GANs) for targeted data augmentation based on a population bias visualisation strategy that groups faces with similar facial attributes and highlights where the model is failing. A sampling method selects faces where the model is not performing well, which are used to fine-tune the CNN. Experiments show the efficacy of our approach in improving driver drowsiness detection for underrepresented ethnicity groups. Here, models trained on publicly available datasets are compared with a model trained using the proposed data augmentation strategy. Although developed in the context of driver drowsiness detection, the proposed framework is not limited to this task and can be applied to other applications. [Ngxande, Mkhuseli] Univ KwaZulu Natal, Sch Engn, ZA-4041 Durban, South Africa; [Tapamo, Jules-Raymond] Univ KwaZulu Natal, Sch Engn, Comp Sci & Engn, ZA-4041 Durban, South Africa; [Burke, Michael] Univ Edinburgh, Inst Percept Action & Behav, Sch Informat, Edinburgh EH8 9AB, Midlothian, Scotland Ngxande, M (corresponding author), Univ KwaZulu Natal, Sch Engn, ZA-4041 Durban, South Africa. mngxande@gmail.com Burke, Michael/AAI-8023-2020; Ngxande, Mkhuseli/AAT-6180-2020 Burke, Michael/0000-0001-7426-1498; Ngxande, Mkhuseli/0000-0001-6780-532X University of Kwa-Zulu Natal This work was supported by the University of Kwa-Zulu Natal.
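The population-bias step described in the abstract above can be made concrete: group predictions by a facial-attribute label, rank groups by error rate, and select the failing group's faces to seed GAN-based augmentation and CNN fine-tuning. A minimal numpy sketch with illustrative array names; the cited paper's actual pipeline uses a visualisation strategy and a trained CNN, neither of which is reproduced here.

import numpy as np

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)   # drowsy / not drowsy ground truth
preds  = rng.integers(0, 2, 1000)   # stand-in for CNN predictions
groups = rng.integers(0, 4, 1000)   # ethnicity-group index per face

# Per-group error rates reveal where the model is failing.
err_by_group = {g: float(np.mean(preds[groups == g] != labels[groups == g]))
                for g in np.unique(groups)}
worst = max(err_by_group, key=err_by_group.get)

# Indices of the underperforming group: these faces would seed the targeted
# GAN augmentation and the subsequent fine-tuning described above.
seed_idx = np.where(groups == worst)[0]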
Abiteboul S, 2017, PROCEEDINGS OF THE 19TH INTERNATIONAL SYMPOSIUM ON PRINCIPLES AND PRACTICE OF DECLARATIVE PROGRAMMING (PPDP 2017), P1, DOI 10.1145/3131851.3131854; Antoniou A, 2018, LECT NOTES COMPUT SC, V11141, P594, DOI 10.1007/978-3-030-01424-7_58; Arjovsky M., 2017, ARXIV170107875, P214; Awais M, 2017, SENSORS-BASEL, V17, DOI 10.3390/s17091991; Bashivan P., 2015, ARXIV151106448; Benthall S, 2019, FAT*'19: PROCEEDINGS OF THE 2019 CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY, P289, DOI 10.1145/3287560.3287575; Bodnar Cristian, 2018, ARXIV180500676; Buolamwini Joy, 2018, P MACH LEARN RES C F, P77; Choi Y., 2017, ARXIV171109020; de Naurois CJ, 2019, ACCIDENT ANAL PREV, V126, P95, DOI 10.1016/j.aap.2017.11.038; De-Arteaga M., 2019, ARXIV190109451; Drewes C., 2000, TESTED STUD LAB TEAC, V21, P248; Folane N. R., 2012, INT J INNOV RES COMP, V4, P2257; Garvie C., 2016, PERPETUAL LINE UNREG; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gulrajani I, 2017, ARXIV170400028; Gupta R., 2019, ARXIV190206818; He KM, 2016, PROC CVPR IEEE, P770, DOI 10.1109/CVPR.2016.90; Isola P, 2016, CORR; Jangid S., 2018, INT J APPL ENG RES, V13, P14657; Kingma D. P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Knipling R. R., 1994, P IVHS AM, V1, P245; Li Chuan, 2016, ARXIV160404382; Li DX, 2017, CHIN CONT DECIS CONF, P1039, DOI 10.1109/CCDC.2017.7978672; Li M., 2015, INT J SIGNAL PROCESS, V8, P107; Luthra A., 2016, ECHO MADE EASY; Massoz Q., 2016, WINT C APPL COMP VIS, P1; Mirza M., 2014, ARXIV14111784; Mok TCW, 2018, ARXIV180511291; Ngiam J., 2018, ARXIV181107056; Ngxande M., 2017, PATT REC ASS S AFR R; Ngxande M., 2019, ARXIV190412631; Ngxande M, 2019, 2019 SOUTHERN AFRICAN UNIVERSITIES POWER ENGINEERING CONFERENCE/ROBOTICS AND MECHATRONICS/PATTERN RECOGNITION ASSOCIATION OF SOUTH AFRICA (SAUPEC/ROBMECH/PRASA), P111, DOI 10.1109/RoboMech.2019.8704766; Parris Jon, 2011, 2011 International Joint Conference on Biometrics (IJCB); Radford A., 2015, ARXIV PREPRINT ARXIV; Raji I. D., 2020, ARXIV200100964; Rajput M. V., 2013, INT J COMPUT APPL, V62, P6; Rhue Lauren, 2019, EMOTION READING TECH; Sahayadhas A, 2012, SENSORS-BASEL, V12, P16937, DOI 10.3390/s121216937; Shadowen N., 2019, TRANSHUMANISM HDB, P247; Shima R, 2017, INT CONF INTEL INFOR, P135, DOI 10.1109/ICIIBMS.2017.8279704; STEWART GW, 1993, SIAM REV, V35, P551, DOI 10.1137/1035134; Ulyanov D., 2016, ARXIV160708022; Wang X., 2018, ARXIV180900219; Wu E., 2018, ARXIV180708093 45 3 3 1 5 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 2169-3536 IEEE ACCESS IEEE Access 2020 8 55592 55601 10.1109/ACCESS.2020.2981912 10 Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications Computer Science; Engineering; Telecommunications LB6NH WOS:000524750000071 Green Submitted, gold 2021-09-15 J Oh, JH; Hong, JY; Baek, JG Oh, Joo-Hyuk; Hong, Jae Yeol; Baek, Jun-Geol Oversampling method using outlier detectable generative adversarial network EXPERT SYSTEMS WITH APPLICATIONS English Article Class imbalance problem; Oversampling; Generative adversarial network; Outlier detection A class imbalance problem occurs when a particular class of data is significantly more or less than another class of data. This problem is difficult to solve; however, solutions such as the oversampling method using synthetic minority oversampling technique (SMOTE) or conditional generative adversarial network (cGAN) have been suggested recently to solve this problem. 
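For reference, SMOTE, the baseline just named, interpolates each synthetic point between a minority sample and one of its k nearest minority neighbours; the record goes on to argue below that such schemes ignore majority-class outliers, which is what motivates OD-GAN. A minimal numpy sketch of SMOTE-style interpolation, with illustrative sizes:

import numpy as np

rng = np.random.default_rng(0)
minority = rng.normal(size=(30, 2))  # minority-class samples

def smote_like(X, n_new, k=5):
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    nn_idx = np.argsort(d, axis=1)[:, :k]        # k nearest neighbours
    i = rng.integers(0, len(X), n_new)           # base samples
    j = nn_idx[i, rng.integers(0, k, n_new)]     # a random neighbour of each
    u = rng.random((n_new, 1))
    return X[i] + u * (X[j] - X[i])              # interpolate on the segment

synthetic = smote_like(minority, n_new=60)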
In the case of SMOTE and its variants, it is possible to generate biased artificial data because they do not consider the entire minority class. To overcome this problem, an oversampling method using cGAN has been proposed. However, such a method does not consider the majority class that affects the classification boundary. In particular, if there is an outlier in the majority class, the classification boundary may be biased. This paper presents an oversampling method using outlier detectable generative adversarial network (OD-GAN) to solve this problem. We use a discriminator, which is used only for training purposes in cGAN, as an outlier detector to quantify the difference between the distributions of the majority and minority classes. The discriminator can detect and remove outliers. This prevents the distortion of the classification boundary caused by outliers. The generator imitates the distribution of the minority class and generates artificial data to balance the dataset. We experiment with various datasets, oversampling techniques, and classifiers. The empirical results show that the performance of OD-GAN is better than that of other oversampling methods for imbalanced datasets with outliers. (C) 2019 Elsevier Ltd. All rights reserved. [Oh, Joo-Hyuk; Hong, Jae Yeol; Baek, Jun-Geol] Korea Univ, Sch Ind Management Engn, 145 Anam Ro, Seoul 02841, South Korea Baek, JG (corresponding author), Korea Univ, Sch Ind Management Engn, 145 Anam Ro, Seoul 02841, South Korea. juheuk007@korea.ac.kr; visar@korea.ac.kr; jungeol@korea.ac.kr National Research Foundation of Korea (NRF) - Korea government (MSIT) [NRF-2019R1A2C2005949]; BK21 Plus program (Big Data in Manufacturing and Logistics Systems, Korea University); Samsung Electronics Co., Ltd. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2019R1A2C2005949). This work was also supported by the BK21 Plus program (Big Data in Manufacturing and Logistics Systems, Korea University) and by Samsung Electronics Co., Ltd. Akbani R, 2004, LECT NOTES COMPUT SC, V3201, P39, DOI 10.1007/978-3-540-30115-8_7; Alcala-Fdez J, 2011, J MULT-VALUED LOG S, V17, P255; Barua S, 2014, IEEE T KNOWL DATA EN, V26, P405, DOI 10.1109/TKDE.2012.232; Bradley AP, 1997, PATTERN RECOGN, V30, P1145, DOI 10.1016/S0031-3203(96)00142-2; Chawla N.
V., 2004, ACM SIGKDD EXPLORATI, V6, P1, DOI DOI 10.1145/1007730.1007733; Chawla NV, 2010, DATA MINING AND KNOWLEDGE DISCOVERY HANDBOOK, SECOND EDITION, P875, DOI 10.1007/978-0-387-09823-4_45; Chawla NV, 2002, J ARTIF INTELL RES, V16, P321, DOI 10.1613/jair.953; Chen J, 2018, 2018 IEEE 9TH ANNUAL INFORMATION TECHNOLOGY, ELECTRONICS AND MOBILE COMMUNICATION CONFERENCE (IEMCON), P1054, DOI 10.1109/IEMCON.2018.8614815; Creswell A, 2018, IEEE SIGNAL PROC MAG, V35, P53, DOI 10.1109/MSP.2017.2765202; Domingos P., 1999, P 5 ACM SIGKDD INT C, V99, P155, DOI DOI 10.1145/312129.312220; Douzas G, 2018, EXPERT SYST APPL, V91, P464, DOI 10.1016/j.eswa.2017.09.030; Freeman J, 1995, OUTLIERS STAT DATA; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Haibo H., 2016, ADAPTIVE SYNTHETIC S, V8, P1322; Han H, 2005, LECT NOTES COMPUT SC, V3644, P878, DOI 10.1007/11538059_91; Isola P, 2017, PROC CVPR IEEE, P5967, DOI 10.1109/CVPR.2017.632; Kerdprasop K., 2011, INT J MECH SCI, V5, P336; Kim T, 2017, PR MACH LEARN RES, V70; Ledig C., 2017, P IEEE C COMP VIS PA, P4681; Lin WC, 2017, INFORM SCIENCES, V409, P17, DOI 10.1016/j.ins.2017.05.008; Liu XY, 2009, IEEE T SYST MAN CY B, V39, P539, DOI 10.1109/TSMCB.2008.2007853; Mazurowski MA, 2008, NEURAL NETWORKS, V21, P427, DOI 10.1016/j.neunet.2007.12.031; Reed S., 2016, ADV NEURAL INFORM PR, P1; Schlegl T, 2017, LECT NOTES COMPUT SC, V10265, P146, DOI 10.1007/978-3-319-59050-9_12; Wei W, 2013, WORLD WIDE WEB, V16, P449, DOI 10.1007/s11280-012-0178-0; Zheng YJ, 2018, NEURAL NETWORKS, V102, P78, DOI 10.1016/j.neunet.2018.02.015 26 5 7 1 24 PERGAMON-ELSEVIER SCIENCE LTD OXFORD THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND 0957-4174 1873-6793 EXPERT SYST APPL Expert Syst. Appl. NOV 1 2019 133 1 8 10.1016/j.eswa.2019.05.006 8 Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Operations Research & Management Science Computer Science; Engineering; Operations Research & Management Science IF5JO WOS:000473117300001 2021-09-15 J Tschaepe, M Tschaepe, Mark Pragmatic Ethics for Generative Adversarial Networks: Coupling, Cyborgs, and Machine Learning CONTEMPORARY PRAGMATISM English Article machine learning; generative adversarial networks; bias; coupling; ethics of technology ARTIFICIAL-INTELLIGENCE; ALGORITHMS; MODELS This article addresses the need for adaptive ethical analysis within machine learning that accounts for emerging problems concerning social bias and generative adversarial networks (GANs). I use John Dewey's criticisms of the reflex arc concept in psychology as a basis for understanding how these problems stem from human-GAN interaction. By combining Dewey's criticisms with Donna Haraway's idea of cyborgs, Luciano Floridi's concept of distributed morality, and Shaowen Bardzell's recommendations for a feminist approach to human-computer interaction, I suggest a dynamic perspective from which to begin analyzing and solving issues of injustice evident in this particular domain of machine learning. [Tschaepe, Mark] Prairie View A&M Univ, Div Social Work Behav & Polit Sci, Philosophy, Prairie View, TX 77446 USA Tschaepe, M (corresponding author), Prairie View A&M Univ, Div Social Work Behav & Polit Sci, Philosophy, Prairie View, TX 77446 USA.
mdtschaepe@pvamu.edu Ananny M, 2016, SCI TECHNOL HUM VAL, V41, P93, DOI 10.1177/0162243915606523; Angwin Julia, 2016, PROPUBLICA; Asaro PM, 2019, IEEE TECHNOL SOC MAG, V38, P40, DOI 10.1109/MTS.2019.2915154; Bardzell S, 2011, 29TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, P675; Bardzell S, 2010, CHI2010: PROCEEDINGS OF THE 28TH ANNUAL CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, VOLS 1-4, P1301; Benjamin R, 2019, RACE TECHNOLOGY ABOL; Borning Alan, 2012, P SIGCHI C HUM FACT, P1125, DOI DOI 10.1145/2207676.2208560; Borowiec Steven, 2016, THE GUARDIAN, V15; Brey PAE, 2012, NANOETHICS, V6, P1, DOI 10.1007/s11569-012-0141-7; Broussard M, 2018, ARTIFICIAL UNINTELLIGENCE: HOW COMPUTERS MISUNDERSTAND THE WORLD; Challen R, 2019, BMJ QUAL SAF, V28, P231, DOI 10.1136/bmjqs-2018-008370; Ciston S, 2019, J SCI TECHNOL ARTS, V11, P3, DOI 10.7559/citarj.v11i2.665; Clark A, 1998, ANALYSIS, V58, P7, DOI 10.1111/1467-8284.00096; Clark A., 2010, EXTENDED MIND, P43, DOI [10.7551/mitpress/9780262014038.003.0003, DOI 10.7551/MITPRESS/9780262014038.003.0003, DOI 10.7551/MITPRESS/9780262014038.001.0001]; Dastin J., 2018, REUTERS; De Preester H, 2011, FOUND SCI, V16, P119, DOI 10.1007/s10699-010-9188-5; Dewey J., 1972, EARLY WORKS, V5, P96; Dewey John, 1972, J DEWEY EARLY WORKS, V5, P192; Floridi L, 2013, SCI ENG ETHICS, V19, P727, DOI 10.1007/s11948-012-9413-4; Friedman B, 2019, VALUE SENSITIVE DESIGN: SHAPING TECHNOLOGY WITH MORAL IMAGINATION, P1, DOI 10.7551/mitpress/7585.001.0001; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Goodley D, 2014, SUBJECTIVITY, V7, P342, DOI 10.1057/sub.2014.15; Haefner Joel, 2019, A B AUTOBIOGRAPHY ST, V34, P403; Haraway Donna, 1991, SIMIANS CYBORGS WOME, P149, DOI DOI 10.1007/978-1-4020-3803-7_4; HAUSSLER D, 1988, ARTIF INTELL, V36, P177, DOI 10.1016/0004-3702(88)90002-1; Ihde Don, 1979, TECHNICS PRAXIS PHIL, V24; Ihde Don, 2002, BODIES TECHNOLOGY; Ihde Don, 1995, POSTPHENOMENOLOGY ES; Johnson Mark, BODY LANGUAGE MIND, V1, P17; Keeling K, 2014, CINEMA J, V53, P152, DOI 10.1353/cj.2014.0004; Lillywhite Aspen, 2021, Assist Technol, V33, P129, DOI 10.1080/10400435.2019.1593259; Merleau-Ponty Maurice, 1945, PHENOMENOLOGIE PERCE; Muller Vincent C, 2021, ROUTLEDGE SOCIAL SCI, P1; Munnik Rene, 2001, AM PHILOS TECHNOLOGY, P95; Neil C., 2016, WEAPONS MATH DESTRUC; Ninareh M., 2019, ARXIV PREPRINT ARXIV; Noble, 2018, ALGORITHMS OPPRESSIO; Paez A, 2019, MIND MACH, V29, P441, DOI 10.1007/s11023-019-09502-w; Richards DP, 2019, CONTEMP PRAGMAT, V16, P366, DOI 10.1163/18758185-01604007; Rudin C, 2019, NAT MACH INTELL, V1, P206, DOI 10.1038/s42256-019-0048-x; Saltz JS, 2018, SIGCSE'18: PROCEEDINGS OF THE 49TH ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION, P952, DOI 10.1145/3159450.3159483; Sejnowski TJ, 2018, DEEP LEARNING REVOLUTION, P1; Shook JR, 2016, CAMB Q HEALTHC ETHIC, V25, P120, DOI 10.1017/S0963180115000377; Still A., 2016, P 7 COMP CREAT C ICC; Still A, 2019, ARTS, V8, DOI 10.3390/arts8010036; Subbarao Kambhampati, 2018, ARXIV PREPRINT ARXIV; Tavani H., 2008, HDB INFORM COMPUTER, P69, DOI DOI 10.1145/242485.242493; van de Vijver F.J.R., 1998, CROSS CULTURAL SURVE, P41; Venturelli A. N, 2012, PRAGMATISM TODAY, V3, P132; Wang FY, 2016, IEEE-CAA J AUTOMATIC, V3, P113, DOI 10.1109/JAS.2016.7471613 50 0 0 1 1 BRILL LEIDEN PLANTIJNSTRAAT 2, P O BOX 9000, 2300 PA LEIDEN, NETHERLANDS 1572-3429 1875-8185 CONTEMP PRAGMAT Contemp. Pragmat. 
MAY 2021 18 1 95 111 10.1163/18758185-BJA10005 17 Philosophy Philosophy SL8BW WOS:000657139400006 2021-09-15 J Jimenez, F; Koepke, A; Gregg, M; Frey, M Jimenez, Felix; Koepke, Amanda; Gregg, Mary; Frey, Michael Generative Adversarial Network Performance in Low-Dimensional Settings JOURNAL OF RESEARCH OF THE NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY English Article earth mover distance; experiment protocol; generative adversarial network; mode tunneling; modeling error; target distribution complexity A generative adversarial network (GAN) is an artificial neural network with a distinctive training architecture, designed to create examples that faithfully reproduce a target distribution. GANs have recently had particular success in applications involving high-dimensional distributions in areas such as image processing. Little work has been reported for low dimensions, where properties of GANs may be better identified and understood. We studied GAN performance in simulated low-dimensional settings, allowing us to transparently assess effects of target distribution complexity and training data sample size on GAN performance in a simple experiment. This experiment revealed two important forms of GAN error, tail underfilling and bridge bias, where the latter is analogous to the tunneling observed in high-dimensional GANs. [Jimenez, Felix; Koepke, Amanda; Gregg, Mary; Frey, Michael] NIST, Stat Engn Div, Gaithersburg, MD 20899 USA; [Jimenez, Felix] Univ Colorado, Boulder, CO 80309 USA Jimenez, F (corresponding author), NIST, Stat Engn Div, Gaithersburg, MD 20899 USA.; Jimenez, F (corresponding author), Univ Colorado, Boulder, CO 80309 USA. felix.jimenez@nist.gov; amanda.koepke@nist.gov; mary.gregg@nist.gov; michael.frey@nist.gov Arjovsky M., 2017, ARXIV170107875, P214; Arjovsky M, 2017, ARXIV PREPRINTARXIV; Arora S, 2017, PR MACH LEARN RES, V70; Auricchio G, 2018, P5793; Bau D., 2018, ARXIV PREPRINT ARXIV; Borji A, 2019, COMPUT VIS IMAGE UND, V179, P41, DOI 10.1016/j.cviu.2018.10.009; Brock A, 2018, P PERVASIVE DISPLAYS; Cha SH, 2007, MATH MOD METH APPL S, V1, P300, DOI DOI 10.1007/S00167-009-0884-Z; Creswell A, 2018, IEEE SIGNAL PROC MAG, V35, P53, DOI 10.1109/MSP.2017.2765202; Cuturi M., 2013, NIPS, P2292; DOBRUSHIN RL, 1970, THEOR PROBAB APPL+, V15, P458, DOI 10.1137/1115049; Flamary R, 2019, POT PYTHON OPTIMAL T; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Grauman K, 2004, P 2004 IEEECOMPUTER, DOI [10.1109/CVPR.2004.1315035, DOI 10.1109/CVPR.2004.1315035]; Gulrajani I., 2017, ADV NEURAL INFORM PR, V30, P5767; Hwang U, 2017, ARXIV PREPRINT ARXIV; Isola P, 2017, PROC CVPR IEEE, P5967, DOI 10.1109/CVPR.2017.632; Jin Y, 2017, ARXIV PREPRINT ARXIV; Karras T, 2017, ARXIVPREPRINT ARXIV1; Kingma D. P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Kutner M. H., 2005, APPL LINEAR STAT MOD, V5; Lala S, 2018, P THEIEEE HIGH PERF; Lecun Y, 1998, P IEEE, V86, P2278, DOI 10.1109/5.726791; Lee J.
D., 2016, C LEARN THEOR; Lim S, 2019, PROCEEDINGS OF THE 2019 ANNUAL ACM SOUTHEAST CONFERENCE (ACMSE 2019), P262, DOI 10.1145/3299815.3314482; Marsland S., 2015, MACHINE LEARNING ALG; Mescheder L, 2018, PR MACH LEARN RES, V80; Mescheder Lars, 2017, ADV NEURAL INFORM PR; Mirza M., 2014, ARXIV14111784; Monge G, 1781, HIST LACADEMIE ROYAL; Mustafa Mustafa, 2019, Computational Astrophysics and Cosmology, V6, DOI 10.1186/s40668-019-0029-9; Nagarajan Vaishnavh, 2017, ADV NEURAL INFORM PR, P5585; PELEG S, 1989, IEEE T PATTERN ANAL, V11, P739, DOI 10.1109/34.192468; Putin E, 2018, MOL PHARMACEUT, V15, P4386, DOI 10.1021/acs.molpharmaceut.7b01137; Radford A., 2015, ARXIV PREPRINT ARXIV; Rubner Y, 1998, SIXTH INTERNATIONAL CONFERENCE ON COMPUTER VISION, P59, DOI 10.1109/ICCV.1998.710701; Salimans T, 2016, ADV NEURAL INFORM PR, P2234, DOI DOI 10.5555/3157096.3157346; Sonderby CK, 2016, ARXIV PREPRINTARXIV; Theis L., 2015, ARXIV PREPRINT ARXIV; Zhu JY, 2017, IEEE I CONF COMP VIS, P2242, DOI 10.1109/ICCV.2017.244 40 0 0 0 0 NATL INST STANDARDS & TECHNOLOGY-NIST GAITHERSBURG INFORMATION SERVICE OFFICE, GAITHERSBURG, MD 20899 USA 1044-677X 2165-7254 J RES NATL INST STAN J. Res. Natl. Inst. Stand. Technol. APR 20 2021 126 126008 10.6028/jres.126.008 17 Instruments & Instrumentation; Physics, Applied Instruments & Instrumentation; Physics RW2MD WOS:000646359500001 gold 2021-09-15 J Lowney, B; Lokmer, I; O'Brien, GS; Bean, CJ Lowney, Brydon; Lokmer, Ivan; O'Brien, Gareth S.; Bean, Christopher J. Pre-migration diffraction separation using generative adversarial networks GEOPHYSICAL PROSPECTING English Article Data processing; Imaging; Seismics VELOCITY ANALYSIS; WAVE-FIELD Diffraction imaging is the process of separating diffraction events from the seismic wavefield and imaging them independently, highlighting subsurface discontinuities. While there are many analytic-based methods for diffraction imaging, which use kinematic properties, dynamic properties, or both, of the diffracted wavefield, they can be slow and require parameterization. Here, we propose an image-to-image generative adversarial network to automatically separate diffraction events on pre-migrated seismic data in a fraction of the time of conventional methods. To train the generative adversarial network, plane-wave destruction was applied to a range of synthetic and real images from field data to create training data. These training data were screened and any areas where the plane-wave destruction did not perform well, such as synclines and areas of complex dip, were removed to prevent bias in the neural network. A total of 14,132 screened images were used to train the final generative adversarial network. The trained network has been applied across several geologically distinct field datasets, including a 3D example. Here, generative adversarial network separation is shown to be comparable to a benchmark separation created with plane-wave destruction, and up to 12 times faster. This demonstrates the clear potential in generative adversarial networks for fast and accurate diffraction separation. [Lowney, Brydon; Lokmer, Ivan; O'Brien, Gareth S.] Univ Coll Dublin, Sch Earth Sci, Dublin D04 V1W8, Ireland; [Lowney, Brydon; Lokmer, Ivan] Univ Coll Dublin, Irish Ctr Res Appl Geosci, Dublin D04 V1W8, Ireland; [O'Brien, Gareth S.] Tullow Oil Ltd, Appl Geophys & Technol, Dublin D18 NH10, Ireland; [Bean, Christopher J.]
Dublin Inst Adv Studies, Sch Cosm Phys, Dublin D02 Y006, Ireland Lowney, B (corresponding author), Univ Coll Dublin, Sch Earth Sci, Dublin D04 V1W8, Ireland.; Lowney, B (corresponding author), Univ Coll Dublin, Irish Ctr Res Appl Geosci, Dublin D04 V1W8, Ireland. brydon.lowney@ucdconnect.ie Lokmer, Ivan/AAP-9538-2021 Lokmer, Ivan/0000-0001-7009-1583; Bean, Christopher/0000-0003-3285-2446; O'Brien, Gareth/0000-0002-7345-0286; Lowney, Brydon/0000-0002-0894-1249 Science Foundation Ireland (SFI)Science Foundation Ireland [13/RC/2092]; European Regional Development Fund by PIPCO RSG This research has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under grant number 13/RC/2092 and is co-funded under the European Regional Development Fund by PIPCO RSG and its member companies. The authors extend their gratitude to Tullow Oil and the Petroleum Affairs Division of Ireland for providing field data used in training. The authors would also like to thank Song Hou, Henning Hoeber and Ewa Kaszycka of CGG for their discussions on neural networks and diffractions. Finally, the authors would like to thank Shearwater for providing an academic license for Shearwater Reveal, which was used in this study. Aharchaou M., 2020, LEADING EDGE, V39, P718; Alotaibi A, 2020, SYMMETRY-BASEL, V12, DOI 10.3390/sym12101705; Basheer IA, 2000, J MICROBIOL METH, V43, P3, DOI 10.1016/S0167-7012(00)00201-3; Bean C.J., 2020, 1 EAGE ANN C, P1; Berkovitch A, 2009, GEOPHYSICS, V74, pWCA75, DOI 10.1190/1.3198210; Biloti R., 2014, 84 ANN INT M, P4816, DOI DOI 10.1190/SEGAM2014-1168.1; Bonnefoy-Claudet S, 2006, EARTH-SCI REV, V79, P205, DOI 10.1016/j.earscirev.2006.07.004; Brownlee J., 2019, DEV PIX2PIX GAN IMAG; Chen ZH, 2013, GEOPHYSICS, V78, pV1, DOI 10.1190/geo2012-0142.1; Childs, 2016, NATL ARCH MARINE SEI; Claerbout J.F., 1985, FUNDAMENTALS GEOPHYS; Decker L, 2015, INTERPRETATION-J SUB, V3, pSF21, DOI 10.1190/INT-2014-0081.1; Dell S, 2011, GEOPHYSICS, V76, pS187, DOI [10.1190/GEO2010-0229.1, 10.1190/geo2010-0229.1]; Fehler M, 2011, SEAM PHASE 1 CHALLEN; Fomel, 2013, 83 ANN INT M SEG, P4054; Fomel S, 2002, GEOPHYSICS, V67, P1946, DOI 10.1190/1.1527095; Fomel S., 2013, J OPEN RES SOFTW, V1, pE8, DOI [10.5334/jors.ag, DOI 10.5334/J0RS.AG]; Fomel S, 2007, GEOPHYSICS, V72, pU89, DOI 10.1190/1.2781533; Gelius LJ, 2011, GEOPHYS PROSPECT, V59, P400, DOI 10.1111/j.1365-2478.2010.00928.x; Goodfellow I., 2016, C WORKSH NEUR INF PR, P1; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Hajela, 2020, 2 INT C INV RES COMP, P543; Han C., 2019, ACM INT C INF KNOWL; HARLAN WS, 1984, GEOPHYSICS, V49, P1869, DOI 10.1190/1.1441600; Ho Y, 2020, IEEE ACCESS, V8, P4806, DOI 10.1109/ACCESS.2019.2962617; Hoeber, 2019, 81 EAGE C EXH LOND, P1; Isola P, 2017, PROC CVPR IEEE, P5967, DOI 10.1109/CVPR.2017.632; Ji, 2019, P SIAM INT C DAT MIN, P630; Kahng M, 2019, IEEE T VIS COMPUT GR, V25, P310, DOI 10.1109/TVCG.2018.2864500; KANASEWICH ER, 1988, GEOPHYSICS, V53, P334, DOI 10.1190/1.1442467; Kennett B., 2001, SEISMIC WAVEFIELD IN, V1; Khaidukov V, 2004, GEOPHYSICS, V69, P1478, DOI 10.1190/1.1836821; Klokov A, 2012, GEOPHYSICS, V77, pS131, DOI [10.1190/geo2012-0017.1, 10.1190/GEO2012-0017.1]; Knerr S., 1990, Neurocomputing, Algorithms, Architectures and Applications. 
Proceedings of the NATO Advanced Research Workshop, P41; Koren Z., 2018, 80 EAGE C EXH COP DE, P1; Kozlov E, 2004, 74 ANN INT M SEG, P1131, DOI [10.1190/1.1851082, DOI 10.1190/1.1851082]; Krey T., 1952, GEOPHYSICS, V17, P843, DOI [10.1190/1.1437815, DOI 10.1190/1.1437815]; Landa, 2010, 72 EAGE C EXH BARC S; Landa E, 2006, GEOPHYS PROSPECT, V54, P491, DOI 10.1111/j.1365-2478.2006.00552.x; Landa E., 2008, 78 ANN INT M SEG, P2176, DOI DOI 10.1190/1.3059318; LANDA E, 2007, CONVENTIONAL SEISMIC; Li RR, 2018, IEEE J-STARS, V11, P3954, DOI 10.1109/JSTARS.2018.2833382; Lian S, 2018, J VIS COMMUN IMAGE R, V56, P296, DOI 10.1016/j.jvcir.2018.10.001; Martini F, 2001, GEOPHYS J INT, V145, P423, DOI 10.1046/j.1365-246x.2001.01391.x; Miller J., 2016, PROCESSING MULTICHAN; Moser, 2015, P UNC RES TECHN C, P1121, DOI DOI 10.2118/178538--MS; Moser TJ, 2008, GEOPHYS PROSPECT, V56, P627, DOI 10.1111/j.1365-2478.2007.00718.x; Nguyen Tu, 2017, P ADV NEUR INF PROC; O'Brien GS, 2020, GEOPHYS PROSPECT, V68, P1758, DOI 10.1111/1365-2478.12951; Oliveira DAB, 2018, IEEE GEOSCI REMOTE S, V15, P1952, DOI 10.1109/LGRS.2018.2866199; Radford A., 2016, INT C LEARN REPR SAN; Reshef M, 2009, GEOPHYS PROSPECT, V57, P811, DOI 10.1111/j.1365-2478.2008.00773.x; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Sankesara H., 2019, UNET INTRO SYMMETRY; Schwarz B, 2019, ADV GEOPHYS, V60, P1, DOI 10.1016/bs.agph.2019.05.001; Schwarz B, 2019, GEOPHYSICS, V84, pV157, DOI 10.1190/GEO2018-0368.1; Shekhar A., 2019, WHAT ARE L1 L2 LOSS; Sun J, 2020, GEOPHYS PROSPECT, V68, P845, DOI 10.1111/1365-2478.12893; Tran N., 2019, P NEURIPS, V32, P13253; TROREY AW, 1970, GEOPHYSICS, V35, P762, DOI 10.1190/1.1440129; Tschannen V, 2020, GEOPHYS PROSPECT, V68, P830, DOI 10.1111/1365-2478.12889; Xu R., 2017, C COMP VIS PATT REC; Yuan Y., 2020, GEOPHYSICS, V85, pIJA; Zhang TY, 2019, LECT NOTES COMPUT SC, V11767, P777, DOI 10.1007/978-3-030-32251-9_85 64 0 0 0 0 WILEY HOBOKEN 111 RIVER ST, HOBOKEN 07030-5774, NJ USA 0016-8025 1365-2478 GEOPHYS PROSPECT Geophys. Prospect. JUN 2021 69 5 949 967 10.1111/1365-2478.13086 APR 2021 19 Geochemistry & Geophysics Geochemistry & Geophysics SC8UG WOS:000641133600001 hybrid 2021-09-15 J Liu, JL; Li, WH; Pei, HB; Wang, Y; Qu, F; Qu, Y; Chen, YH Liu, Jialun; Li, Wenhui; Pei, Hongbin; Wang, Ying; Qu, Feng; Qu, You; Chen, Yuhao Identity Preserving Generative Adversarial Network for Cross-Domain Person Re-Identification IEEE ACCESS English Article Person re-identification; domain adaptation; style transfer; unsupervised learning CLASSIFICATION In this paper, we study the domain adaptive person re-identification (re-ID) problem: train a re-ID model on the labeled source domain and test it on the unlabeled target domain. It is known to be challenging due to the feature distribution bias between the source domain and target domain. The previous methods directly reduce the bias by image-to-image style translation between the source and the target domain in an unsupervised manner. However, these methods only consider the rough bias between the source domain and the target domain but neglect the detailed bias between the source domain and the target camera domains (divided by camera views), which contains critical factors influencing the testing performance of the re-ID model. In this work, we particularly focus on the bias between the source domain and the target camera domains.
To overcome this problem, a multi-domain image-to-image translation network, termed Identity Preserving Generative Adversarial Network (IPGAN), is proposed to learn the mapping relationship between the source domain and the target camera domains. IPGAN can translate the styles of images from the source domain to the target camera domains and generate many images with the styles of the target camera domains. Then the re-ID model is trained with the translated images generated by IPGAN. During the training of the re-ID model, we aim to learn discriminative features. We design and train a novel re-ID model, termed IBN-reID, in which Instance and Batch Normalization blocks (IBN-blocks) are introduced. Experimental results on Market-1501, DukeMTMC-reID and MSMT17 show that the images generated by IPGAN are more suitable for cross-domain re-ID. Very competitive re-ID accuracy is achieved by our method. [Liu, Jialun; Li, Wenhui; Pei, Hongbin; Wang, Ying; Qu, Feng; Qu, You; Chen, Yuhao] Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Jilin, Peoples R China Li, WH (corresponding author), Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Jilin, Peoples R China. liwh@jlu.edu.cn Science and Technology Development Plan of Jilin Province [20170204020GX]; Development and Reform Commission of Jilin Province [2019C054-2]; National Natural Science Foundation of ChinaNational Natural Science Foundation of China (NSFC) [5180523] This work was supported in part by the Science and Technology Development Plan of Jilin Province under Grant 20170204020GX, in part by the Development and Reform Commission of Jilin Province under Grant 2019C054-2, and in part by the National Natural Science Foundation of China under Grant 5180523. Chen YH, 2018, KSII T INTERNET INF, V12, P392, DOI 10.3837/tiis.2018.01.019; Choi Yunjey, 2017, 1711 ARXIV; Dalal N, 2005, PROC CVPR IEEE, P886, DOI 10.1109/cvpr.2005.177; Deng WJ, 2018, PROC CVPR IEEE, P994, DOI 10.1109/CVPR.2018.00110; Fan HH, 2018, ACM T MULTIM COMPUT, V14, DOI 10.1145/3243316; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; He K., 2016, PROC CVPR IEEE, P770, DOI DOI 10.1109/CVPR.2016.90; Isola P, 2017, ARXIV161107004V, DOI DOI 10.1109/CVPR.2017.632; Kim T, 2017, PR MACH LEARN RES, V70; Leibe B, 2017, ARXIV170307737, DOI DOI 10.1007/978-3-319-46466-4_52; Liao SC, 2015, PROC CVPR IEEE, P2197, DOI 10.1109/CVPR.2015.7298832; Liu M.Y., 2016, ADV NEURAL INFORM PR, P469; Liu MY, 2017, ADV NEUR IN, V30; Liu ZM, 2017, IEEE I CONF COMP VIS, P2448, DOI 10.1109/ICCV.2017.266; Luo FL, 2019, IEEE T CYBERNETICS, V49, P2406, DOI 10.1109/TCYB.2018.2810806; PAN X, 2018, ECCV; Pang SC, 2018, NEURAL PROCESS LETT, V47, P859, DOI 10.1007/s11063-017-9720-5; Peng PX, 2016, PROC CVPR IEEE, P1306, DOI 10.1109/CVPR.2016.146; Ristani E, 2016, LECT NOTES COMPUT SC, V9914, P17, DOI 10.1007/978-3-319-48881-3_2; Song JF, 2019, PROC CVPR IEEE, P719, DOI 10.1109/CVPR.2019.00081; Song L., 2018, ARXIV180711334; Subramaniam A., 2016, ADV NEURAL INFORM PR, P2667; Sun Y., 2017, ARXIV171109349; Sun YF, 2017, IEEE I CONF COMP VIS, P3820, DOI 10.1109/ICCV.2017.410; Wang FQ, 2016, PROC CVPR IEEE, P1288, DOI 10.1109/CVPR.2016.144; Wang JY, 2018, PROC CVPR IEEE, P2275, DOI 10.1109/CVPR.2018.00242; Wang SJ, 2014, NEURAL PROCESS LETT, V39, P25, DOI 10.1007/s11063-013-9288-7; Wei LH, 2018, PROC CVPR IEEE, P79, DOI 10.1109/CVPR.2018.00016; Welling M., 2013, AUTOENCODING VARIATI; Xiao T, 2016, PROC CVPR IEEE, P1249, DOI 10.1109/CVPR.2016.140; Yu HX, 2017, IEEE I CONF COMP VIS, P994, DOI 10.1109/ICCV.2017.113; Zhang
LF, 2019, INFORM SCIENCES, V485, P154, DOI 10.1016/j.ins.2019.02.008; Zhang L, 2016, PROC CVPR IEEE, P1239, DOI 10.1109/CVPR.2016.139; Zheng L., 2016, PERSON RE IDENTIFICA; Zheng L, 2015, IEEE I CONF COMP VIS, P1116, DOI 10.1109/ICCV.2015.133; Zheng ZD, 2017, IEEE I CONF COMP VIS, P3774, DOI 10.1109/ICCV.2017.405; Zhong Z, 2019, PROC CVPR IEEE, P598, DOI 10.1109/CVPR.2019.00069; Zhong Z, 2019, IEEE T IMAGE PROCESS, V28, P1176, DOI 10.1109/TIP.2018.2874313; Zhu F, 2015, COMPUT SCI INF SYST, V12, P787, DOI 10.2298/CSIS141114026Z; Zhu JY, 2017, IEEE I CONF COMP VIS, P2242, DOI 10.1109/ICCV.2017.244 40 2 2 1 2 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 2169-3536 IEEE ACCESS IEEE Access 2019 7 114021 114032 10.1109/ACCESS.2019.2933910 12 Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications Computer Science; Engineering; Telecommunications IT6YK WOS:000483022100053 gold, Green Submitted 2021-09-15 C Xu, H; Cao, YA; Jia, RP; Liu, YB; Tan, JL IEEE Xu, Hao; Cao, Yanan; Jia, Ruipeng; Liu, Yanbing; Tan, Jianlong Sequence Generative Adversarial Network for Long Text Summarization 2018 IEEE 30TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI) Proceedings-International Conference on Tools With Artificial Intelligence English Proceedings Paper 30th IEEE International Conference on Tools with Artificial Intelligence (ICTAI) NOV 05-07, 2018 Volos, GREECE IEEE, IEEE Comp Soc, Biol & Artificial Intelligence Fdn Sequence Generative Adversarial Network; Text Summarization; Deep learning; Reinforcement learning In this paper, we propose a new adversarial training framework for the text summarization task. Although sequence-to-sequence models have achieved state-of-the-art performance in abstractive summarization, the training strategy (MLE) suffers from exposure bias in the inference stage. This discrepancy between training and inference makes generated summaries less coherent and accurate, which is more prominent in summarizing long articles. To address this issue, we model abstractive summarization using a Generative Adversarial Network (GAN), aiming to minimize the gap between generated summaries and the ground-truth ones. This framework consists of two models: a generator that generates summaries and a discriminator that evaluates them. A reinforcement learning (RL) strategy is used to guarantee the co-training of the generator and discriminator. Besides, motivated by the nature of the summarization task, we design a novel Triple-RNNs discriminator, and extend the off-the-shelf generator by appending an encoder and a decoder with an attention mechanism. Experimental results showed that our model significantly outperforms the state-of-the-art models, especially on long text corpora. [Xu, Hao; Jia, Ruipeng] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China; [Xu, Hao; Cao, Yanan; Jia, Ruipeng; Liu, Yanbing; Tan, Jianlong] Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China Xu, H (corresponding author), Univ Chinese Acad Sci, Sch Cyber Secur, Beijing, Peoples R China.; Xu, H (corresponding author), Chinese Acad Sci, Inst Informat Engn, Beijing, Peoples R China. xuhao2@iie.ac.cn; caoyaonan@iie.ac.cn; jiaruipeng@iie.ac.cn; liuyanbing@iie.ac.cn; tanjianlong@iie.ac.cn Abadi M., 2016, ARXIV PREPRINT ARXIV; Bengio S., 2015, ADV NEURAL INFORM PR; Chopra Sumit, 2016, P 2016 C N AM CHAPT, DOI DOI 10.18653/V1/N16-1012; COHN T., 2008, P 22 INT C COMP LING, P137; Conroy J.
M., 2001, SIGIR Forum, P406; Denton E. L., 2015, ADV NEURAL INFORM PR, DOI DOI 10.5555/; Erkan G, 2004, J ARTIF INTELL RES, V22, P457, DOI 10.1613/jair.1523; Ferrier L., 2001, MAXIMUM ENTROPY APPR; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Hermann KM, 2015, ADV NEUR IN, V28; Huszar F., 2015, STAT, V1050, P16; Kalchbrenner Nal, 2013, P 2013 C EMP METH NA, V3, P413, DOI DOI 10.1146/ANNUREV.NEURO.26.041002.131047; Kingma D. P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Lin C.-Y., 2004, NON TRADITIONAL REF; Liu, 2017, ARXIV170406933; Liu L., 2017, ARXIV171109357; Liu P. J., 2016, GOOGLE RES BLOG GOOG, V24; Liu S., 2017, CS585 PROJECT REPORT; Mihalcea R., 2004, P 2004 C EMP METH NA; Nallapati Ramesh, 2016, P 20 SIGNLL C COMP N, DOI DOI 10.18653/V1/K16-1028; Paulus R., 2017, ARXIV170504304; Rush Alexander M, 2015, P 2015 C EMP METH NA, DOI DOI 10.18653/V1/D15-1044; Sutskever Ilya, 2014, ADV NEURAL INFORM PR, V8, P3104, DOI DOI 10.1007/S10107-014-0839-0; Wang BN, 2016, PROCEEDINGS OF THE 54TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1, P1288, DOI 10.18653/v1/p16-1122; Wang S, 2017, IEEE INT CONGR BIG, P305, DOI 10.1109/BigDataCongress.2017.46; Wu Y., 2017, P 2017 C EMP METH NA, P1778; Xiang B., 2016, SEQUENCE TO SEQUENCE; Yin Wenpeng, 2017, ARXIV170201923; Yu LT, 2017, THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, P2852; Zajic David, 2004, P 2004 DOC UND C DUC, P112 30 1 2 0 4 IEEE NEW YORK 345 E 47TH ST, NEW YORK, NY 10017 USA 1082-3409 978-1-5386-7449-9 PROC INT C TOOLS ART 2018 242 248 10.1109/ICTAI.2018.00045 7 Computer Science, Artificial Intelligence Computer Science BL9NP WOS:000457750200035 2021-09-15 J Balakrishnan, V; Champion, D; Barr, E; Kramer, M; Sengar, R; Bailes, M Balakrishnan, Vishnu; Champion, David; Barr, Ewan; Kramer, Michael; Sengar, Rahul; Bailes, Matthew Pulsar candidate identification using semi-supervised generative adversarial networks MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY English Article methods: data analysis; methods: statistical; pulsars: general DISCOVERY; ALGORITHM; SELECTION; SYSTEM Machine learning methods are increasingly helping astronomers identify new radio pulsars. However, they require a large amount of labelled data, which is time-consuming to produce and biased. Here, we describe a semi-supervised generative adversarial network, which achieves better classification performance than the standard supervised algorithms using majority unlabelled data sets. We achieved an accuracy and mean F-score of 94.9 percent trained on only 100 labelled candidates and 5000 unlabelled candidates, compared to our standard supervised baseline, which scored 81.1 percent and 82.7 percent, respectively. Our final model trained on a much larger labelled data set achieved an accuracy and mean F-score value of 99.2 percent and a recall rate of 99.7 percent. This technique allows for high-quality classification during the early stages of pulsar surveys on new instruments when limited labelled data are available. We open-source our work along with a new pulsar-candidate data set produced from the High Time Resolution Universe - South Low Latitude Survey. This data set has the largest number of pulsar detections of any public data set, and we hope it will be a valuable tool for benchmarking future machine learning models.
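Editorial note on the record above: the semi-supervised discriminator that the Balakrishnan et al. abstract describes can be made concrete with the K-class formulation of Salimans et al. (2016), which this record cites. The following Python (PyTorch) sketch illustrates that general loss under our own naming; it is not the authors' released code, and the function and tensor names are hypothetical.

import torch
import torch.nn.functional as F

def ssgan_discriminator_loss(logits_lab, y_lab, logits_unl, logits_fake):
    # Supervised term: ordinary K-class cross-entropy on the labelled candidates.
    loss_sup = F.cross_entropy(logits_lab, y_lab)
    # With Z = sum_k exp(logit_k), model p(real) = Z / (Z + 1), so that
    # log p(real) = logsumexp - softplus(logsumexp) and log p(fake) = -softplus(logsumexp).
    lse_unl = torch.logsumexp(logits_unl, dim=1)
    lse_fake = torch.logsumexp(logits_fake, dim=1)
    loss_unl = -(lse_unl - F.softplus(lse_unl)).mean()  # unlabelled real -> "real"
    loss_fake = F.softplus(lse_fake).mean()             # generated -> "fake"
    return loss_sup + loss_unl + loss_fake

Note that the unlabelled majority enters only through the real/fake terms, which is what lets a classifier trained this way benefit from thousands of unlabelled candidates alongside a small labelled set.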
[Balakrishnan, Vishnu; Champion, David; Barr, Ewan; Kramer, Michael] Max Planck Inst Radioastron, Auf Dem Hugel 69, D-53121 Bonn, Germany; [Sengar, Rahul; Bailes, Matthew] Swinburne Univ Technol, Ctr Astrophys & Supercomp, POB 218, Hawthorn, Vic 3122, Australia Balakrishnan, V; Champion, D; Barr, E (corresponding author), Max Planck Inst Radioastron, Auf Dem Hugel 69, D-53121 Bonn, Germany. vishnu@mpifr-bonn.mpg.de; champion@mpifr-bonn.mpg.de; ebarr@mpifr-bonn.mpg.de Champion, David/0000-0003-1361-7723; Kramer, Michael/0000-0002-4175-2271 Australian GovernmentAustralian GovernmentCGIAR; Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) Program via Astronomy Australia Ltd (AAL) Observational data used in this work were made available by the High Time Resolution Universe (HTRU) scientific collaboration. The Parkes Observatory, used in the collection of this data, is part of the Australia Telescope National Facility, which is funded by the Australian Government for operation as a National Facility managed by CSIRO. The data analysis was performed on the OzSTAR national supercomputing facilities at Swinburne University of Technology and the HERCULES computing cluster operated by the Max Planck Computing & Data Facility (MPCDF). OzSTAR is funded under the Astronomy National Collaborative Research Infrastructure Strategy (NCRIS) Program via Astronomy Australia Ltd (AAL). We would like to thank members of the open-source community for maintaining packages that were directly used for our work, including NUMPY (Harris et al. 2020), MATPLOTLIB (Hunter 2007), SEABORN (Waskom et al. 2017), SCIKIT-LEARN (Pedregosa et al. 2011), KERAS (Chollet et al. 2015), and TENSORFLOW (Abadi et al. 2016). Abadi M., 2016, TENSORFLOW LARGE SCA; Agarwal D, 2020, MON NOT R ASTRON SOC, V497, P1661, DOI 10.1093/mnras/staa1856; Bates SD, 2012, MON NOT R ASTRON SOC, V427, P1052, DOI 10.1111/j.1365-2966.2012.22042.x; Bethapudi S, 2018, ASTRON COMPUT, V23, P15, DOI 10.1016/j.ascom.2018.02.002; Bishop C.M., 2006, PATTERN RECOGN; Bue B. D., 2014, SOILLE PIERREED P C; Cameron AD, 2020, MON NOT R ASTRON SOC, V493, P1063, DOI 10.1093/mnras/staa039; Chollet F., 2015, KERAS; COOLEY JW, 1965, MATH COMPUT, V19, P297, DOI 10.2307/2003354; Cordes JM, 2006, ASTROPHYS J, V637, P446, DOI 10.1086/498335; Dai Z., 2017, P 31 INT C NEUR INF, V30, P6513; Devine T. R., 2020, GRADUATE THESES DISS, P7727; Eatough RP, 2010, MON NOT R ASTRON SOC, V407, P2443, DOI 10.1111/j.1365-2966.2010.17082.x; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Guo P, 2019, MON NOT R ASTRON SOC, V490, P5424, DOI 10.1093/mnras/stz2975; Harris CR, 2020, NATURE, V585, P357, DOI 10.1038/s41586-020-2649-2; Harry IW, 2009, PHYS REV D, V80, DOI 10.1103/PhysRevD.80.104014; He KM, 2016, PROC CVPR IEEE, P770, DOI 10.1109/CVPR.2016.90; HULSE RA, 1975, ASTROPHYS J, V195, pL51, DOI 10.1086/181708; Hunter JD, 2007, COMPUT SCI ENG, V9, P90, DOI 10.1109/MCSE.2007.55; Jones D. L., 2012, P C 2012 IEEE AER C, P1; Karras T., 2018, P IEEE CVF C COMP VI, P4401; Keith MJ, 2010, MON NOT R ASTRON SOC, V409, P619, DOI 10.1111/j.1365-2966.2010.17325.x; Kingma D.
P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Krishnan VV, 2020, SCIENCE, V367, P577, DOI 10.1126/science.aax7007; Lecun Y, 1998, P IEEE, V86, P2278, DOI 10.1109/5.726791; Lee KJ, 2013, MON NOT R ASTRON SOC, V433, P688, DOI 10.1093/mnras/stt758; LeNail A., 2019, J OPEN SOURCE SOFTW, V4, P747, DOI DOI 10.21105/JOSS.00747; Lyon RJ, 2016, MON NOT R ASTRON SOC, V459, P1104, DOI 10.1093/mnras/stw656; Ma Z., 2019, INT C DAT MIN BIG DA, P191; Manchester RN, 2001, MON NOT R ASTRON SOC, V328, P17, DOI 10.1046/j.1365-8711.2001.04751.x; Mustafa Mustafa, 2019, Computational Astrophysics and Cosmology, V6, DOI 10.1186/s40668-019-0029-9; Ng C, 2015, MON NOT R ASTRON SOC, V450, P2922, DOI 10.1093/mnras/stv753; Pedregosa F, 2011, J MACH LEARN RES, V12, P2825; Radford A., 2015, ARXIV PREPRINT ARXIV; Ransom S., 2011, ASCL1107017; Reed S., 2016, INT C MACH LEARN NY, V48, P1060; RUMELHART DE, 1986, NATURE, V323, P533, DOI 10.1038/323533a0; Salimans T., 2016, P ADV NEURAL INFORM, P2234; Schawinski K, 2017, MON NOT R ASTRON SOC, V467, pL110, DOI 10.1093/mnrasl/slx008; Simonyan K., 2014, ARXIV PREPRINT; STAELIN DH, 1969, P IEEE, V57, P724, DOI 10.1109/PROC.1969.7051; Stovall K, 2014, ASTROPHYS J, V791, DOI 10.1088/0004-637X/791/1/67; Szegedy Christian, 2016, PROC CVPR IEEE, P2818, DOI DOI 10.1109/CVPR.2016.308; Voisin G, 2020, ASTRON ASTROPHYS, V638, DOI 10.1051/0004-6361/202038104; Waskom M., 2017, MWASKOMSEABORN V0 8, DOI DOI 10.5281/ZENODO.883859; Wen ZG, 2016, ASTRON ASTROPHYS, V592, DOI 10.1051/0004-6361/201628214; WOLSZCZAN A, 1992, NATURE, V355, P145, DOI 10.1038/355145a0; Xu YH, 2018, MON NOT R ASTRON SOC, V476, P5579, DOI 10.1093/mnras/sty566; Zhu WW, 2014, ASTROPHYS J, V781, DOI 10.1088/0004-637X/781/2/117; Zingales T, 2018, ASTRON J, V156, DOI 10.3847/1538-3881/aae77c 51 1 1 1 1 OXFORD UNIV PRESS OXFORD GREAT CLARENDON ST, OXFORD OX2 6DP, ENGLAND 0035-8711 1365-2966 MON NOT R ASTRON SOC Mon. Not. Roy. Astron. Soc. JUL 2021 505 1 1180 1194 10.1093/mnras/stab1308 15 Astronomy & Astrophysics Astronomy & Astrophysics TG5OB WOS:000671453100079 hybrid, Green Submitted 2021-09-15 C Dering, ML; Tucker, CS Nie, JY; Obradovic, Z; Suzumura, T; Ghosh, R; Nambiar, R; Wang, C; Zang, H; BaezaYates, R; Hu, X; Kepner, J; Cuzzocrea, A; Tang, J; Toyoda, M Dering, Matthew L.; Tucker, Conrad S. Generative Adversarial Networks for Increasing the Veracity of Big Data 2017 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) IEEE International Conference on Big Data English Proceedings Paper IEEE International Conference on Big Data (IEEE Big Data) DEC 11-14, 2017 Boston, MA IEEE, IEEE Comp Soc, ELSEVIER, CISCO Generative Models; Big Data; Deep Learning; GANs; Sketches This work describes how automated data generation integrates into a big data pipeline. A lack of veracity in big data can cause models that are inaccurate or biased by trends in the training data. As a pipeline matures, this can lead to issues that are difficult to overcome. This work describes the use of a Generative Adversarial Network to generate sketch data, such as those that might be used in a human verification task. The generated sketches are verified as recognizable using a crowd-sourcing methodology, which finds that they were correctly recognized 43.8% of the time, in contrast to human-drawn sketches, which were 87.7% accurate. This method is scalable and can be used to generate realistic data in many domains and to bootstrap a dataset used for training a model prior to deployment.
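Editorial note: as a reference point for the GAN-based data generation the Dering and Tucker abstract describes, here is a minimal sketch of one adversarial update with the non-saturating GAN loss of Goodfellow et al. (2014), which the record cites. G, D, the optimizers, and the latent size are placeholders; this is the generic training step, not the paper's implementation, and D is assumed to emit one logit per sample.

import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, real, z_dim=100):
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)
    fake = G(torch.randn(n, z_dim))
    # Discriminator update: push real scores towards 1 and generated towards 0.
    d_loss = (F.binary_cross_entropy_with_logits(D(real), ones)
              + F.binary_cross_entropy_with_logits(D(fake.detach()), zeros))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator update (non-saturating trick): push D's score on fakes towards 1.
    g_loss = F.binary_cross_entropy_with_logits(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

Detaching the fake batch in the discriminator update keeps the generator's graph intact for the second backward pass, which is the standard way to alternate the two updates in one step.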
[Dering, Matthew L.] Penn State Univ, Comp Sci & Engn, University Pk, PA 16802 USA; [Tucker, Conrad S.] Penn State Univ, Engn Design & Ind Engn, University Pk, PA 16802 USA Dering, ML (corresponding author), Penn State Univ, Comp Sci & Engn, University Pk, PA 16802 USA. mld284@cse.psu.edu; ctucker4@psu.edu NSF DUE/IUSE [1449650]; DARPA FUN DESIGN [HR00111820008] The authors would like to acknowledge the NSF DUE/IUSE #1449650: Investigating the Impact of Co-Learning Systems in Providing Customized, Real-time Student Feedback and DARPA FUN DESIGN #HR00111820008. Beecks C, 2015, PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIG DATA, P2834, DOI 10.1109/BigData.2015.7364093; Blei DM, 2003, J MACH LEARN RES, V3, P993, DOI 10.1162/jmlr.2003.3.4-5.993; Bodnar T., 2016, IEEE T SYSTEMS MAN C; Bodnar T, 2014, IEEE INT CONF BIG DA, P636, DOI 10.1109/BigData.2014.7004286; Cao HA, 2016, 2016 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), P1301, DOI 10.1109/BigData.2016.7840734; Chawla NV, 2002, J ARTIF INTELL RES, V16, P321, DOI 10.1613/jair.953; Fried D, 2014, IEEE INT CONF BIG DA, P778, DOI 10.1109/BigData.2014.7004305; Ganin Y, 2016, J MACH LEARN RES, V17; GATYS LA, 2016, PROC CVPR IEEE, P2414, DOI DOI 10.1109/CVPR.2016.265; Goodfellow I. J., 2014, ARXIV PREPRINT ARXIV; He HB, 2008, IEEE IJCNN, P1322, DOI 10.1109/IJCNN.2008.4633969; Jia XW, 2016, 2016 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), P1192, DOI 10.1109/BigData.2016.7840723; Jia XW, 2015, PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIG DATA, P837, DOI 10.1109/BigData.2015.7363830; Li XP, 2016, 2016 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), P931, DOI 10.1109/BigData.2016.7840689; Makhzani A., 2015, ARXIV151105644; Mansimov E, 2015, ARXIV151102793; Melville P., 2005, Information Fusion, V6, P99, DOI 10.1016/j.inffus.2004.04.001; Mukherjee T, 2016, 2016 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), P976, DOI 10.1109/BigData.2016.7840696; Nowling RJ, 2014, 2014 IEEE FOURTH INTERNATIONAL CONFERENCE ON BIG DATA AND CLOUD COMPUTING (BDCLOUD), P49, DOI 10.1109/BDCloud.2014.38; Odena A., 2016, ARXIV161009585; Oord A.v. 
d., 2016, ARXIV160106759; Papernot N, 2016, 1ST IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY, P372, DOI 10.1109/EuroSP.2016.36; Papernot Nicolas, 2016, ARXIV160202697; Pourhabib A, 2015, J MACH LEARN RES, V16, P2695; Reed S., 2016, ARXIV160505396; Salimans T, 2016, ADV NEURAL INFORM PR, P2234, DOI DOI 10.5555/3157096.3157346; Sangkloy P, 2016, ACM T GRAPHIC, V35, DOI 10.1145/2897824.2925954; Schuh Michael A., 2014, 2014 IEEE International Conference on Big Data (Big Data), P53, DOI 10.1109/BigData.2014.7004404; Sutskever I, 2011, P 28 ANN INT C MACH, P1017; Vinyals O, 2015, PROC CVPR IEEE, P3156, DOI 10.1109/CVPR.2015.7298935; Wojnowiez M, 2016, 2016 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), P3601, DOI 10.1109/BigData.2016.7841024 31 13 13 0 3 IEEE NEW YORK 345 E 47TH ST, NEW YORK, NY 10017 USA 2639-1589 978-1-5386-2715-0 IEEE INT CONF BIG DA 2017 2595 2602 8 Computer Science, Artificial Intelligence; Computer Science, Information Systems Computer Science BJ8DN WOS:000428073702073 2021-09-15 C Chae, DK; Kang, JS; Kim, SW; Choi, J Assoc Comp Machinery Chae, Dong-Kyu; Kang, Jin-Soo; Kim, Sang-Wook; Choi, Jaeho Rating Augmentation with Generative Adversarial Networks towards Accurate Collaborative Filtering WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019) English Proceedings Paper World Wide Web Conference (WWW) MAY 13-17, 2019 San Francisco, CA Assoc Comp Machinery, Microsoft, Amazon, Bloomberg, Google, Criteo AI Lab, CISCO, NTENT, Spotify, Yahoo Res, Wikimedia Fdn, Baidu, DiDi, eBay, Facebook, LinkedIn, Megagon Labs, Mix, Mozilla, Netflix Res, NE Univ, Khoury Coll Comp Sci, Pinterest, Quora, Visa Res, Walmart Labs, Airbnb, Letgo, Gordon & Betty Moore Fdn, Webcastor Collaborative filtering; generative adversarial networks; data sparsity; data augmentation; top-N recommendation RECOMMENDATION Generative Adversarial Networks (GANs) have not only achieved great success in various generation tasks such as image synthesis, but have also boosted the accuracy of classification tasks by generating additional labeled data, which is called data augmentation. In this paper, we propose a Rating Augmentation framework with GAN, named RAGAN, aiming to alleviate the data sparsity problem in collaborative filtering (CF), eventually improving recommendation accuracy significantly. We identify a unique challenge that arises when applying GAN to CF for rating augmentation: naive RAGAN tends to generate values biased towards high ratings. Then, we propose a refined version of RAGAN, named RAGAN(BT), which addresses this challenge successfully. Via our extensive experiments, we validate that our RAGAN(BT) is highly effective at solving the data sparsity problem, thereby providing existing CF models with great improvement in accuracy under various situations such as basic top-N recommendation, long-tail item recommendation, and recommendation to cold-start users. [Chae, Dong-Kyu; Kang, Jin-Soo; Kim, Sang-Wook] Hanyang Univ, Seoul, South Korea; [Choi, Jaeho] NAVER Corp, Seongnam, South Korea Kim, SW (corresponding author), Hanyang Univ, Seoul, South Korea.
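Editorial note on the RAGAN record above: the rating-augmentation idea (generate plausible ratings to densify the sparse user-item matrix before training a CF model) can be sketched in Python (PyTorch) as below. This is a toy conditional generator written for illustration only; the class name, layer sizes, and the [0, 1] rating scaling are our assumptions, not details from the paper.

import torch
import torch.nn as nn

class RatingGenerator(nn.Module):
    # Maps noise plus a user's observed rating row to a dense rating row.
    def __init__(self, n_items, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_items, 256), nn.ReLU(),
            nn.Linear(256, n_items), nn.Sigmoid())  # ratings rescaled to [0, 1]

    def forward(self, z, observed):
        return self.net(torch.cat([z, observed], dim=1))

# Augmentation usage (hypothetical): keep observed entries of a sparse
# matrix R (zeros = missing) and fill the rest with generated values.
# gen = RatingGenerator(n_items=R.size(1))
# filled = torch.where(R > 0, R, gen(torch.randn(R.size(0), 64), R))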
kyu899@hanyang.ac.kr; jensoo7023@hanyang.ac.kr; wook@hanyang.ac.kr; choi.jaeho@navercorp.com National Research Foundation of Korea (NRF) - Korea government (MSIT: Ministry of Science and ICT) [NRF-2017R1A2B3004581]; Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) - Ministry of Science and ICT [NRF-2017M3C4A7083678]; Naver Corporation This work was supported by (1) the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT: Ministry of Science and ICT) (No. NRF-2017R1A2B3004581) and (2) Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT (No. NRF-2017M3C4A7083678). Also, we thank the Naver Corporation for their support including computing environment and data, which helped us greatly in performing this research successfully. Antoniou A., 2017, ARXIV PREPRINT ARXIV; Bousmalis K, 2017, PROC CVPR IEEE, P95, DOI 10.1109/CVPR.2017.18; Chae DK, 2018, CIKM'18: PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, P137, DOI 10.1145/3269206.3271743; Choi E., 2017, ARXIV170306490; Cremonesi Paolo, 2010, P 4 ACM C REC SYST, P39, DOI DOI 10.1145/1864708.1864721; Dalvi N.N., 2013, P 7 INT AAAI C WEBL, V7, P110; Donahue C., 2018, ARXIV180204208; Frid-Adar Maayan, 2018, ARXIV180301229; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; He XN, 2017, PROCEEDINGS OF THE 26TH INTERNATIONAL CONFERENCE ON WORLD WIDE WEB (WWW'17), P173, DOI 10.1145/3038912.3052569; Hernandez-Lobato J. M., 2014, P 31 INT C MACH LEAR, P1512; Hong DZ, 2015, BUILDSYS'15 PROCEEDINGS OF THE 2ND ACM INTERNATIONAL CONFERENCE ON EMBEDDED SYSTEMS FOR ENERGY-EFFICIENT BUILT, P123, DOI 10.1145/2821650.2821657; Hu N, 2009, COMMUN ACM, V52, P144, DOI 10.1145/1562764.1562800; Hwang WS, 2016, PROC INT CONF DATA, P349, DOI 10.1109/ICDE.2016.7498253; Kabbur S, 2013, 19TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'13), P659; Koren Y, 2008, P 14 ACM SIGKDD INT, P426, DOI DOI 10.1145/1401890.1401944; Koren Y, 2009, COMPUTER, V42, P30, DOI 10.1109/MC.2009.263; Lee J, 2019, IEEE T KNOWL DATA EN, V31, P3, DOI 10.1109/TKDE.2017.2698461; Lee SC, 2018, IEICE T INF SYST, VE101D, P244, DOI 10.1587/transinf.2017EDL8039; Lee Yeon-Chang, 2018, P 2018 AAAI INT C AR; Lee Y, 2018, WEB CONFERENCE 2018: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW2018), P783, DOI 10.1145/3178876.3186159; Liang DW, 2016, PROCEEDINGS OF THE 25TH INTERNATIONAL CONFERENCE ON WORLD WIDE WEB (WWW'16), P951, DOI 10.1145/2872427.2883090; Mirza M., 2014, ARXIV14111784; Pan R, 2008, IEEE DATA MINING, P502, DOI 10.1109/ICDM.2008.16; Park C, 2016, INFORM SCIENCES, V374, P100, DOI 10.1016/j.ins.2016.09.024; Radford A., 2015, ARXIV PREPRINT ARXIV; Rendle Steffen, 2009, P 25 C UNC ART INT, P452; Sarwar Badrul, 2001, P 10 INT C WORLD WID, P285, DOI DOI 10.1145/371920.372071; Sedhain S, 2015, WWW'15 COMPANION: PROCEEDINGS OF THE 24TH INTERNATIONAL CONFERENCE ON WORLD WIDE WEB, P111, DOI 10.1145/2740908.2742726; Steck Harald, 2010, P 16 ACM SIGKDD INT, P713, DOI DOI 10.1145/1835804.1835895; Tang J., 2012, P 5 ACM INT C WEB SE, P93; Wang Hongwei, 2017, ARXIV171108267; Wang J, 2017, SIGIR'17: PROCEEDINGS OF THE 40TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, P515, DOI 10.1145/3077136.3080786; Wu Y, 2016, PROCEEDINGS OF THE NINTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA 
MINING (WSDM'16), P153, DOI 10.1145/2835776.2835837; Yu LT, 2017, THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, P2852 35 4 4 0 1 ASSOC COMPUTING MACHINERY NEW YORK 1515 BROADWAY, NEW YORK, NY 10036-9998 USA 978-1-4503-6674-8 2019 2616 2622 10.1145/3308558.3313413 7 Computer Science, Theory & Methods Computer Science BN5LD WOS:000483508402066 2021-09-15 C Abbas, W; Shakeel, MH; Khurshid, N; Taj, M Gedeon, T; Wong, KW; Lee, M Abbas, Waseem; Shakeel, Muhammad Haroon; Khurshid, Numan; Taj, Murtaza Patch-Based Generative Adversarial Network Towards Retinal Vessel Segmentation NEURAL INFORMATION PROCESSING (ICONIP 2019), PT IV Communications in Computer and Information Science English Proceedings Paper 26th International Conference on Neural Information Processing (ICONIP) of the Asia-Pacific-Neural-Network-Society (APNNS) DEC 12-15, 2019 Sydney, AUSTRALIA Asia Pacific Neural Network Soc Deep Learning; Generative Adversarial Network; Segmentation; Retinal Vessels BLOOD-VESSELS; MATCHED-FILTER; IMAGES; EXTRACTION Retinal blood vessels are considered to be reliable diagnostic biomarkers of ophthalmologic and diabetic retinopathy. Monitoring and diagnosis depend on expert analysis of both thin and thick retinal vessels, which has recently been carried out by various artificial intelligence techniques. Existing deep learning methods attempt to segment retinal vessels using a unified loss function optimized for both thin and thick vessels with equal importance. Due to the variable thickness, biased distribution, and differing spatial features of thin and thick vessels, a unified loss function is more influential towards the identification of thick vessels, resulting in weak segmentation. To address this problem, a conditional patch-based generative adversarial network is proposed which utilizes a generator network and a patch-based discriminator network conditioned on the sample data, with an additional loss function to learn both thin and thick vessels. Experiments are conducted on the publicly available STARE and DRIVE datasets, which show that the proposed model outperforms the state-of-the-art methods. [Abbas, Waseem] Mentor, Cloud Applicat Solut Div, Lahore, Pakistan; [Shakeel, Muhammad Haroon; Khurshid, Numan; Taj, Murtaza] Lahore Univ Management Sci LUMS, Syed Babar Ali Sch Sci & Engn, Dept Comp Sci, Lahore, Pakistan Shakeel, MH (corresponding author), Lahore Univ Management Sci LUMS, Syed Babar Ali Sch Sci & Engn, Dept Comp Sci, Lahore, Pakistan.
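Editorial note on the patch-based discriminator in the Abbas et al. record above: scoring overlapping patches of a (fundus image, vessel map) pair rather than the whole image is the pix2pix-style PatchGAN design (Isola et al., 2017, cited elsewhere in this export). A minimal Python (PyTorch) sketch of such a conditional discriminator follows; the channel counts and depth are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    # Scores local patches of the conditioned pair; output is a grid of logits,
    # one per receptive patch, rather than a single whole-image score.
    def __init__(self, in_ch=2):  # 1-channel fundus image + 1-channel vessel map
        super().__init__()
        def block(i, o, norm=True):
            layers = [nn.Conv2d(i, o, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.BatchNorm2d(o))
            layers.append(nn.LeakyReLU(0.2))
            return layers
        self.net = nn.Sequential(
            *block(in_ch, 64, norm=False), *block(64, 128), *block(128, 256),
            nn.Conv2d(256, 1, 4, padding=1))

    def forward(self, image, vessels):
        return self.net(torch.cat([image, vessels], dim=1))

Because each logit sees only a local receptive field, thin vessels in any patch can contribute to the adversarial gradient, which is the motivation the abstract gives for the patch-based design.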
muhammad_waseem@mentor.com; 15030040@lums.edu.pk; 15060051@lums.edu.pk; murtaza.taj@lums.edu.pk Abbas W, 2019, INT CONF ACOUST SPEE, P1408, DOI 10.1109/ICASSP.2019.8683776; Abramoff Michael D, 2010, IEEE Rev Biomed Eng, V3, P169, DOI 10.1109/RBME.2010.2084567; Azzopardi G, 2015, MED IMAGE ANAL, V19, P46, DOI 10.1016/j.media.2014.08.002; Badrinarayanan V, 2017, IEEE T PATTERN ANAL, V39, P2481, DOI 10.1109/TPAMI.2016.2644615; Dasgupta A, 2017, 2017 IEEE 14TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2017), P248, DOI 10.1109/ISBI.2017.7950512; Fraz MM, 2012, COMPUT METH PROG BIO, V108, P600, DOI 10.1016/j.cmpb.2011.08.009; Fraz MM, 2012, COMPUT METH PROG BIO, V108, P407, DOI 10.1016/j.cmpb.2012.03.009; Fraz MM, 2012, IEEE T BIO-MED ENG, V59, P2538, DOI 10.1109/TBME.2012.2205687; Fu HZ, 2016, I S BIOMED IMAGING, P698, DOI 10.1109/ISBI.2016.7493362; Hoover A, 2000, IEEE T MED IMAGING, V19, P203, DOI 10.1109/42.845178; Orlando JI, 2017, IEEE T BIO-MED ENG, V64, P16, DOI 10.1109/TBME.2016.2535311; Li QL, 2016, IEEE T MED IMAGING, V35, P109, DOI 10.1109/TMI.2015.2457891; Liskowski P, 2016, IEEE T MED IMAGING, V35, P2369, DOI 10.1109/TMI.2016.2546227; Marin D, 2011, IEEE T MED IMAGING, V30, P146, DOI 10.1109/TMI.2010.2064333; Melinscak M., 2015, INT C COMP VIS THEOR; Nazir U., 2019, P IEEE C COMP VIS PA, P39; Niemeijer M., 2004, METHODS EVALUATING S; Patton N, 2006, PROG RETIN EYE RES, V25, P99, DOI 10.1016/j.preteyeres.2005.07.001; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Roychowdhury S, 2015, IEEE T BIO-MED ENG, V62, P1738, DOI 10.1109/TBME.2015.2403295; Yan ZQ, 2019, IEEE J BIOMED HEALTH, V23, P1427, DOI 10.1109/JBHI.2018.2872813; Yin BJ, 2015, MED IMAGE ANAL, V26, P232, DOI 10.1016/j.media.2015.09.002; You XG, 2011, PATTERN RECOGN, V44, P2314, DOI 10.1016/j.patcog.2011.01.007; Zhang B, 2010, COMPUT BIOL MED, V40, P438, DOI 10.1016/j.compbiomed.2010.02.008; Zhang J, 2016, IEEE T MED IMAGING, V35, P2631, DOI 10.1109/TMI.2016.2587062 25 0 0 0 0 SPRINGER INTERNATIONAL PUBLISHING AG CHAM GEWERBESTRASSE 11, CHAM, CH-6330, SWITZERLAND 1865-0929 1865-0937 978-3-030-36808-1; 978-3-030-36807-4 COMM COM INF SC 2019 1142 49 56 10.1007/978-3-030-36808-1_6 8 Computer Science, Artificial Intelligence; Computer Science, Theory & Methods Computer Science BR4FR WOS:000651201400006 Green Submitted 2021-09-15 C Roy, D; Mukherjee, D; Chanda, B IEEE COMP SOC Roy, Debapriya; Mukherjee, Diganta; Chanda, Bhabatosh An Unsupervised Approach towards Varying Human Skin Tone Using Generative Adversarial Networks 2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR) International Conference on Pattern Recognition English Proceedings Paper 25th International Conference on Pattern Recognition (ICPR) JAN 10-15, 2021 ELECTR NETWORK Int Assoc Pattern Recognit, IEEE Comp Soc, Italian Assoc Comp Vis Pattern Recognit & Machine Learning With the increasing popularity of augmented and virtual reality, retailers are now focusing more on customer satisfaction to increase the amount of sales. Although augmented reality is not a new concept, it has gained much-needed attention over the past few years. Our present work is targeted in this direction and may be used to enhance user experience in various virtual and augmented reality based applications. We propose a model to change the skin tone of a person.
Given any input image of a person or a group of persons, with some value indicating the desired change of skin color towards fairness or darkness, this method can change the skin tone of the persons in the image. This is an unsupervised method and is also unconstrained in terms of pose, illumination, number of persons in the image, etc. The goal of this work is to reduce the time and effort that are generally required for changing the skin tone using existing applications (e.g., Photoshop), whether by professionals or novices. To establish the efficacy of this method, we have compared our results with those of some popular photo editors and also with the results of an existing benchmark method related to human attribute manipulation. Rigorous experiments on different datasets show the effectiveness of this method in terms of synthesizing perceptually convincing outputs. [Roy, Debapriya; Mukherjee, Diganta; Chanda, Bhabatosh] Indian Stat Inst, Kolkata, India Roy, D (corresponding author), Indian Stat Inst, Kolkata, India. debapriyakundu1@gmail.com; diganta@isical.ac.in; chanda@isical.ac.in Al-Mohair HK, 2015, APPL SOFT COMPUT, V33, P337, DOI 10.1016/j.asoc.2015.04.046; Brand J, 2000, INT C PATT RECOG, P1056, DOI 10.1109/ICPR.2000.905653; Chakravarti Laha, 1967, ROY HDB METHODS APPL, V1; Cheng ZH, 2017, IEEE INT CONF COMM, P1030, DOI 10.1109/ICCW.2017.7962794; Deng J, 2009, PROC CVPR IEEE, P248, DOI 10.1109/CVPRW.2009.5206848; Dong H, 2019, IEEE I CONF COMP VIS, P9025, DOI 10.1109/ICCV.2019.00912; FAN WB, 2009, INT C ARTS TECHN, P157; Gong K, 2017, PROC CVPR IEEE, P6757, DOI 10.1109/CVPR.2017.715; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; He ZL, 2019, IEEE T IMAGE PROCESS, V28, P5464, DOI 10.1109/TIP.2019.2916751; Heusel M., 2017, ADV NEURAL INFORM PR, P6626; Isola P, 2017, ARXIV161107004V, DOI DOI 10.1109/CVPR.2017.632; Jo Y, 2019, IEEE I CONF COMP VIS, P1745, DOI 10.1109/ICCV.2019.00183; Johnson J, 2016, LECT NOTES COMPUT SC, V9906, P694, DOI 10.1007/978-3-319-46475-6_43; Kakumanu P, 2007, PATTERN RECOGN, V40, P1106, DOI 10.1016/j.patcog.2006.06.010; Kanzawa Y., 2011, P IAPR C MACH VIS AP, V12, P14; Kolkur S, 2017, ADV INTEL SYS RES, V137, P324; Liu ZW, 2016, PROC CVPR IEEE, P1096, DOI 10.1109/CVPR.2016.124; Naji S, 2019, ARTIF INTELL REV, V52, P1041, DOI 10.1007/s10462-018-9664-9; Newell A, 2016, LECT NOTES COMPUT SC, V9912, P483, DOI 10.1007/978-3-319-46484-8_29; Nunez A.S., 2008, P 2008 IEEE GEOSC RE, V2; Osindero S., 2014, CONDITIONAL GENERATI; Salimans T., 2016, ADV NEURAL INFORM PR; Shaik KB, 2015, PROCEDIA COMPUT SCI, V57, P41, DOI 10.1016/j.procs.2015.07.362; Szegedy C, 2016, PROC CVPR IEEE, P2818, DOI 10.1109/CVPR.2016.308; Tan WR, 2012, IEEE T IND INFORM, V8, P138, DOI 10.1109/TII.2011.2172451; Thao NT, 2018, MATH PROBL ENG, V2018, DOI 10.1155/2018/5754604; Wang YL, 2018, IEEE WINT CONF APPL, P112, DOI 10.1109/WACV.2018.00019; Wang Z, 2003, CONF REC ASILOMAR C, P1398; Wang Z, 2004, IEEE T IMAGE PROCESS, V13, P600, DOI 10.1109/TIP.2003.819861; Zhang JC, 2018, PROCEEDINGS OF THE 2018 ACM MULTIMEDIA CONFERENCE (MM'18), P392, DOI 10.1145/3240508.3240594; Zuo HQ, 2017, IEEE SIGNAL PROC LET, V24, P289, DOI 10.1109/LSP.2017.2654803 32 0 0 0 0 IEEE COMPUTER SOC LOS ALAMITOS 10662 LOS VAQUEROS CIRCLE, PO BOX 3014, LOS ALAMITOS, CA 90720-1264 USA 1051-4651 978-1-7281-8808-9 INT C PATT RECOG 2021 10681 10688 10.1109/ICPR48806.2021.9412852 8 Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Imaging Science & Photographic Technology Computer Science; Engineering; Imaging
Science & Photographic Technology BS0DY WOS:000681331403027 Green Submitted 2021-09-15 J Rammy, SA; Abbas, W; Hassan, NU; Raza, A; Zhang, W Rammy, Sadaqat Ali; Abbas, Waseem; Hassan, Naqy-Ul; Raza, Asif; Zhang, Wu CPGAN: Conditional patch-based generative adversarial network for retinal vessel segmentation IET IMAGE PROCESSING English Article medical image processing; biomedical optical imaging; image segmentation; blood vessels; learning (artificial intelligence); eye; sensitivity analysis; retinal blood vessels; diagnostic biomarker; unified loss function; patch-based generative adversarial network-based technique; generator network; conditional patch-based generative adversarial network; retinal vessel segmentation; CPGAN; deep learning methods; diabetic retinopathy; ophthalmologic retinopathy; spatial features; biased distribution; fundoscopic images; receiver operating characteristic curves BLOOD-VESSELS; MATCHED-FILTER; IMAGES; EXTRACTION; MODEL Retinal blood vessels are the diagnostic bio-markers of ophthalmologic and diabetic retinopathy; both thick and thin vessels are used for diagnostic and monitoring purposes. The existing deep learning methods attempt to segment the retinal vessels using a unified loss function. However, a difference in the spatial features of thick and thin vessels and a biased distribution create an imbalance in thickness, rendering the unified loss function useful only for thick vessels. To address this challenge, a patch-based generative adversarial network-based technique is proposed which iteratively learns both thick and thin vessels in fundoscopic images. It introduces an additional loss function that allows the generator network to learn thin and thick vessels, while the discriminator network assists in segmenting out both vessels as a combined objective function. Compared with state-of-the-art techniques, the proposed model demonstrates enhanced accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve on the STARE, DRIVE, and CHASEDB1 datasets. [Rammy, Sadaqat Ali; Zhang, Wu] Shanghai Univ, Sch Comp Engn & Sci, Shanghai, Peoples R China; [Abbas, Waseem] Mentor Siemens Business, Cloud Applict Solut Div, Lahore, Pakistan; [Hassan, Naqy-Ul] Comsats Univ, Dept Comp Sci, Vehari Campus, Islamabad, Pakistan; [Raza, Asif] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Minhang Campus, Shanghai, Peoples R China; [Zhang, Wu] Shanghai Inst Appl Math & Mech, Shanghai, Peoples R China Zhang, W (corresponding author), Shanghai Inst Appl Math & Mech, Shanghai, Peoples R China. wzhang@shu.edu.cn raza, asif/AAV-2240-2020 raza, asif/0000-0002-7278-2801 key project of the National Natural Science Foundation of ChinaNational Natural Science Foundation of China (NSFC) [91630206] This work was supported by the key project of the National Natural Science Foundation of China [grant number 91630206].
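Editorial note on the CPGAN record above: read as a combined objective, the standard conditional patch-GAN segmentation loss takes the form below. This is a hedged reconstruction of the generic adversarial-plus-pixel-wise formulation, since the record does not spell out the paper's exact additional loss; \lambda is an assumed weighting hyperparameter and \mathcal{L}_{\mathrm{seg}} a pixel-wise term such as binary cross-entropy that lets thin vessels contribute to the generator's gradient.

\min_{G}\max_{D}\; \mathcal{L}(G,D)
  = \mathbb{E}_{x,y}\big[\log D(x,y)\big]
  + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]
  + \lambda\,\mathbb{E}_{x,y}\big[\mathcal{L}_{\mathrm{seg}}(y, G(x))\big]

where x is a fundoscopic image, y its ground-truth vessel map, and D scores (image, vessel map) patches rather than whole images.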
Alom M Z, 2018, ARXIV PREPRINT ARXIV; Azzopardi G, 2015, MED IMAGE ANAL, V19, P46, DOI 10.1016/j.media.2014.08.002; Dasgupta A, 2017, 2017 IEEE 14TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2017), P248, DOI 10.1109/ISBI.2017.7950512; Fraz MM, 2012, COMPUT METH PROG BIO, V108, P600, DOI 10.1016/j.cmpb.2011.08.009; Fraz MM, 2012, COMPUT METH PROG BIO, V108, P407, DOI 10.1016/j.cmpb.2012.03.009; Fraz MM, 2012, IEEE T BIO-MED ENG, V59, P2538, DOI 10.1109/TBME.2012.2205687; Fu HZ, 2016, I S BIOMED IMAGING, P698, DOI 10.1109/ISBI.2016.7493362; Galdran A, 2019, I S BIOMED IMAGING, P556, DOI 10.1109/ISBI.2019.8759380; Hoover A, 2000, IEEE T MED IMAGING, V19, P203, DOI 10.1109/42.845178; Huazhu Fu, 2016, Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. 19th International Conference. Proceedings: LNCS 9901, P132, DOI 10.1007/978-3-319-46723-8_16; Orlando JI, 2017, IEEE T BIO-MED ENG, V64, P16, DOI 10.1109/TBME.2016.2535311; Jin QG, 2019, KNOWL-BASED SYST, V178, P149, DOI 10.1016/j.knosys.2019.04.025; Karn PK, 2019, IET IMAGE PROCESS, V13, P440, DOI 10.1049/iet-ipr.2018.5413; Khan MAU, 2019, PATTERN ANAL APPL, V22, P1177, DOI 10.1007/s10044-018-0696-1; Khan MAU, 2019, PATTERN ANAL APPL, V22, P583, DOI 10.1007/s10044-017-0661-4; Laibacher T., 2019, P IEEE C COMP VIS PA; Li QL, 2016, IEEE T MED IMAGING, V35, P109, DOI 10.1109/TMI.2015.2457891; LINDEBERG T, 1990, IEEE T PATTERN ANAL, V12, P234, DOI 10.1109/34.49051; Liskowski P, 2016, IEEE T MED IMAGING, V35, P2369, DOI 10.1109/TMI.2016.2546227; Maninis Kevis-Kokitsi, 2016, Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. 19th International Conference. Proceedings: LNCS 9901, P140, DOI 10.1007/978-3-319-46723-8_17; Marin D, 2011, IEEE T MED IMAGING, V30, P146, DOI 10.1109/TMI.2010.2064333; Melinak M, 2015, 10 INT C COMP VIS TH; Oliveira A, 2018, EXPERT SYST APPL, V112, P229, DOI 10.1016/j.eswa.2018.06.034; Owen CG, 2009, INVEST OPHTH VIS SCI, V50, P2004, DOI 10.1167/iovs.08-3018; Ricci E, 2007, IEEE T MED IMAGING, V26, P1357, DOI 10.1109/TMI.2007.898551; Roychowdhury S, 2015, IEEE T BIO-MED ENG, V62, P1738, DOI 10.1109/TBME.2015.2403295; Sathananthavathi V, 2018, IET IMAGE PROCESS, V12, P2075, DOI 10.1049/iet-ipr.2017.1266; Soares JVB, 2006, IEEE T MED IMAGING, V25, P1214, DOI 10.1109/TMI.2006.879967; Staal J, 2004, IEEE T MED IMAGING, V23, P501, DOI 10.1109/TMI.2004.825627; Thangaraj S, 2018, IET IMAGE PROCESS, V12, P669, DOI 10.1049/iet-ipr.2017.0284; Wang C, 2019, ENTROPY-SWITZ, V21, DOI 10.3390/e21020168; Wong TY, 2004, OPHTHALMOLOGY, V111, P1183, DOI 10.1016/j.ophtha.2003.09.039; Xue LY, 2019, FRONT INFORM TECH EL, V20, P1075, DOI 10.1631/FITEE.1700404; Yan ZQ, 2019, IEEE J BIOMED HEALTH, V23, P1427, DOI 10.1109/JBHI.2018.2872813; Yan ZQ, 2018, IEEE T MED IMAGING, V37, P1045, DOI 10.1109/TMI.2017.2778748; Yang YY, 2019, IET IMAGE PROCESS, V13, P1, DOI 10.1049/iet-ipr.2018.5173; Yin BJ, 2015, MED IMAGE ANAL, V26, P232, DOI 10.1016/j.media.2015.09.002; You XG, 2011, PATTERN RECOGN, V44, P2314, DOI 10.1016/j.patcog.2011.01.007; Zhang B, 2010, COMPUT BIOL MED, V40, P438, DOI 10.1016/j.compbiomed.2010.02.008; Zhang J, 2016, IEEE T MED IMAGING, V35, P2631, DOI 10.1109/TMI.2016.2587062; Zhang YS, 2018, LECT NOTES COMPUT SC, V11071, P83, DOI 10.1007/978-3-030-00934-2_10; Zhou L, 2018, IET IMAGE PROCESS, V12, P563, DOI 10.1049/iet-ipr.2017.0636 42 3 3 1 14 INST ENGINEERING TECHNOLOGY-IET HERTFORD MICHAEL FARADAY HOUSE SIX HILLS WAY STEVENAGE, HERTFORD SG1 2AY, ENGLAND 1751-9659 1751-9667 IET IMAGE 
PROCESS IET Image Process. MAY 11 2020 14 6 1081 1090 10.1049/iet-ipr.2019.1007 10 Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Imaging Science & Photographic Technology Computer Science; Engineering; Imaging Science & Photographic Technology LJ9BU WOS:000530456000010 Bronze 2021-09-15 J Suh, S; Lee, H; Lukowicz, P; Lee, YO Suh, Sungho; Lee, Haebom; Lukowicz, Paul; Lee, Yong Oh CEGAN: Classification Enhancement Generative Adversarial Networks for unraveling data imbalance problems NEURAL NETWORKS English Article Imbalanced classification; Data augmentation; Generative adversarial networks; Classification enhancement; Ambiguous classes DATA SETS; SMOTE The data imbalance problem in classification is a frequent but challenging problem. In real-world datasets, numerous class distributions are imbalanced, and the classification result under such conditions reveals extreme bias towards the majority class. Recently, the potential of GAN as a data augmentation method on minority data has been studied. In this paper, we propose classification enhancement generative adversarial networks (CEGAN) to enhance the quality of generated synthetic minority data and, more importantly, to improve the prediction accuracy under data-imbalanced conditions. In addition, we propose an ambiguity reduction method using the generated synthetic minority data for the case of multiple similar classes that degrade the classification accuracy. The proposed method is demonstrated with five benchmark datasets. The results indicate that approximating the real data distribution using CEGAN improves the classification performance significantly under data-imbalanced conditions compared with various standard data augmentation methods. (c) 2020 Elsevier Ltd. All rights reserved. [Suh, Sungho; Lee, Haebom; Lee, Yong Oh] Europe Forsch Gesell mbH, Smart Convergence Grp, Korea Inst Sci & Technol, D-66123 Saarbrucken, Germany; [Suh, Sungho; Lukowicz, Paul] TU Kaiserslautern, Dept Comp Sci, D-67663 Kaiserslautern, Germany; [Lukowicz, Paul] German Res Ctr Artificial Intelligence DFKI, D-67663 Kaiserslautern, Germany Lee, YO (corresponding author), Europe Forsch Gesell mbH, Smart Convergence Grp, Korea Inst Sci & Technol, D-66123 Saarbrucken, Germany. yongoh.lee@kist-europe.de Suh, Sungho/AAQ-3354-2021 Suh, Sungho/0000-0003-3723-1980; Lee, Haebom/0000-0001-9250-3526 Korea Institute of Science and Technology Europe Institutional Program [12020] This research was supported by the Korea Institute of Science and Technology Europe Institutional Program (Project No. 12020). Arjovsky M., 2017, ARXIV170107875, P214; Arthur D, 2007, PROCEEDINGS OF THE EIGHTEENTH ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS, P1027; Barua S, 2014, IEEE T KNOWL DATA EN, V26, P405, DOI 10.1109/TKDE.2012.232; Beijbom O, 2012, PROC CVPR IEEE, P1170, DOI 10.1109/CVPR.2012.6247798; Bengio Y., 2013, ICML; Blagus R, 2012, 2012 11TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2012), VOL 2, P89, DOI 10.1109/ICMLA.2012.183; Buda M, 2018, NEURAL NETWORKS, V106, P249, DOI 10.1016/j.neunet.2018.07.011; Chawla NV, 2002, J ARTIF INTELL RES, V16, P321, DOI 10.1613/jair.953; Cohen G., 2017, EMNIST EXTENSION MNI; Darlow L.
N., 2018, ARXIV181003505; Douzas G, 2018, EXPERT SYST APPL, V91, P464, DOI 10.1016/j.eswa.2017.09.030; Gao X., 2019, NEUROCOMPUTING; Glorot X., 2010, P 13 INT C ARTIF INT, P249; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Graves SJ, 2016, REMOTE SENS-BASEL, V8, DOI 10.3390/rs8020161; Grzymala-Busse JW, 2004, COG TECH, P543; Gulrajani I., 2017, ADV NEURAL INFORM PR, V30, P5767; Guo HX, 2017, EXPERT SYST APPL, V73, P220, DOI 10.1016/j.eswa.2016.12.035; Han H, 2005, LECT NOTES COMPUT SC, V3644, P878, DOI 10.1007/11538059_91; He HB, 2009, IEEE T KNOWL DATA EN, V21, P1263, DOI 10.1109/TKDE.2008.239; He HB, 2008, IEEE IJCNN, P1322, DOI 10.1109/IJCNN.2008.4633969; Holland O, 2016, 2016 23RD INTERNATIONAL CONFERENCE ON TELECOMMUNICATIONS (ICT), DOI 10.1109/ICT.2016.7500442; Holzinger A., 2016, MACHINE LEARNING HLT, V9605; Japkowicz N., 2002, Intelligent Data Analysis, V6, P429; Jeatrakul P, 2010, LECT NOTES COMPUT SC, V6444, P152, DOI 10.1007/978-3-642-17534-3_19; Johnson BA, 2013, INT J REMOTE SENS, V34, P6969, DOI 10.1080/01431161.2013.810825; Kingma D. P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Krizhevsky A, 2009, LEARNING MULTIPLE LA; Krizhevsky A., 2012, ADV NEURAL INFORM PR, V25, P1097; Kubat M, 1998, MACH LEARN, V30, P195, DOI 10.1023/A:1007452223027; Larsen A. B. L., 2015, ARXIV151209300; Lecun Y, 1998, P IEEE, V86, P2278, DOI 10.1109/5.726791; LECUN Y, 2010, AT T LABS, V2, P18; Lee M, 2019, IEEE ACCESS, V7, P28158, DOI 10.1109/ACCESS.2019.2899108; Lee YO, 2017, IEEE INT CONF BIG DA, P3248, DOI 10.1109/BigData.2017.8258307; LIU Tian-yu, 2010, Computer Engineering and Science, V32, P150; Lu Xiaoyong, 2015, INT J MACHINE LEARNI, V5, P454, DOI DOI 10.18178/ijmlc.2015.5.6.551; Ma KD, 2016, PROC CVPR IEEE, P1664, DOI 10.1109/CVPR.2016.184; Mac Namee B, 2002, ARTIF INTELL MED, V24, P51, DOI 10.1016/S0933-3657(01)00092-6; Mariani G., 2018, ARXIV180309655; Mirza M., 2014, ARXIV14111784; Ng WWY, 2015, IEEE T CYBERNETICS, V45, P2402, DOI 10.1109/TCYB.2014.2372060; Nguyen Hien M., 2011, International Journal of Knowledge Engineering and Soft Data Paradigms, V3, P4, DOI 10.1504/IJKESDP.2011.039875; Odena A, 2017, PR MACH LEARN RES, V70; Qian N, 1999, NEURAL NETWORKS, V12, P145, DOI 10.1016/S0893-6080(98)00116-6; Radford A., 2015, ARXIV PREPRINT ARXIV; Ramentol E, 2012, KNOWL INF SYST, V33, P245, DOI 10.1007/s10115-011-0465-6; Salimans T, 2016, ADV NEURAL INFORM PR, P2234, DOI DOI 10.5555/3157096.3157346; Schmidhuber J, 2020, NEURAL NETWORKS, V127, P58, DOI 10.1016/j.neunet.2020.04.008; Simonyan K., 2014, ARXIV PREPRINT; Suh S, 2019, APPL SCI-BASEL, V9, DOI 10.3390/app9040746; van der Maaten L, 2008, J MACH LEARN RES, V9, P2579; Van Horn G, 2017, ARXIV170706642, V1; Wang QF, 2019, IEEE ACCESS, V7, P18450, DOI 10.1109/ACCESS.2019.2896409; Wang Z, 2003, CONF REC ASILOMAR C, P1398; WOLD S, 1987, CHEMOMETR INTELL LAB, V2, P37, DOI 10.1016/0169-7439(87)80084-9; Xiao H, 2017, IEEE ASME INT C ADV, P1700, DOI 10.1109/AIM.2017.8014263; Xiao JX, 2010, PROC CVPR IEEE, P3485, DOI 10.1109/CVPR.2010.5539970; Xie JG, 2007, PATTERN RECOGN, V40, P557, DOI 10.1016/j.patcog.2006.01.009; Zhao XM, 2008, PROTEINS, V70, P1125, DOI 10.1002/prot.21870 60 3 3 3 7 PERGAMON-ELSEVIER SCIENCE LTD OXFORD THE BOULEVARD, LANGFORD LANE, KIDLINGTON, OXFORD OX5 1GB, ENGLAND 0893-6080 1879-2782 NEURAL NETWORKS Neural Netw.
JAN 2021 133 69 86 10.1016/j.neunet.2020.10.004 18 Computer Science, Artificial Intelligence; Neurosciences Computer Science; Neurosciences & Neurology PB9DW WOS:000596613900008 33125919 2021-09-15 J Li, X; Rosman, G; Gilitschenski, I; Vasile, CI; DeCastro, JA; Karaman, S; Rus, D Li, Xiao; Rosman, Guy; Gilitschenski, Igor; Vasile, Cristian-Ioan; DeCastro, Jonathan A.; Karaman, Sertac; Rus, Daniela Vehicle Trajectory Prediction Using Generative Adversarial Network With Temporal Logic Syntax Tree Features IEEE ROBOTICS AND AUTOMATION LETTERS English Article Autonomous-driving; prediction; temporal logic In this work, we propose a novel approach for integrating rules into traffic agent trajectory prediction. Consideration of rules is important for understanding how people behave; yet it cannot be assumed that rules are always followed. To address this challenge, we evaluate different approaches of integrating rules as inductive biases into deep learning-based prediction models. We propose a framework based on generative adversarial networks that uses tools from formal methods, namely signal temporal logic and syntax trees. This allows us to leverage information on rule obedience as features in neural networks and improves prediction accuracy without biasing towards lawful behavior. We evaluate our method on a real-world driving dataset and show improvement in performance over off-the-shelf predictors. [Li, Xiao; Gilitschenski, Igor; Rus, Daniela] MIT, Comp Sci & Artificial Intelligence Lab, 77 Massachusetts Ave, Cambridge, MA 02139 USA; [Rosman, Guy; DeCastro, Jonathan A.] Toyota Res Inst, Comp Sci & Artificial Intelligence Lab, Cambridge, MA 02139 USA; [Vasile, Cristian-Ioan] Lehigh Univ, Dept Mech Engn & Mech, Bethlehem, PA 18015 USA; [Karaman, Sertac] MIT, Lab Informat & Decis Syst, 77 Massachusetts Ave, Cambridge, MA 02139 USA Li, X (corresponding author), MIT, Comp Sci & Artificial Intelligence Lab, 77 Massachusetts Ave, Cambridge, MA 02139 USA. xiaoli@mit.edu; rosman@csail.mit.edu; igilitschenstts@mit.edu; cvasile@lehigh.edu; jad455@cornell.edu; sertac@mit.edu; rus@csail.mit.edu Vasile, Cristian-Ioan/0000-0002-1132-1462; DeCastro, Jonathan/0000-0002-0933-9671 Toyota Research Institute (TRI) This work has been supported by the Toyota Research Institute (TRI).
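Editorial note on the record above: the rule-obedience features it describes come from signal temporal logic (STL) robustness values, real numbers that are positive when a rule holds over a trajectory and whose magnitude measures the margin. As a minimal Python (PyTorch) sketch, here is the robustness of a hypothetical "always stay under the speed limit" rule, G(speed <= limit); the rule, names, and threshold are our illustration of the kind of feature fed to a prediction network, not a rule taken from the paper.

import torch

def stl_always_leq(signal, threshold):
    # Pointwise robustness of (signal <= threshold) is threshold - signal;
    # the 'always' (globally) operator takes the minimum over the time axis.
    return (threshold - signal).min(dim=-1).values

# speeds = torch.tensor([[8.0, 9.5, 11.0]])
# stl_always_leq(speeds, 10.0)  # tensor([-1.0]): the rule is violated by a margin of 1.0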
Arechiga N, 2019, IEEE INT VEH SYM, P58, DOI 10.1109/IVS.2019.8813875; Bansal M., 2019, ARXIV181203079; Caesar H., 2020, P IEEECVF C COMPUTER, P11621; Censi A, 2019, IEEE INT CONF ROBOT, P8536, DOI 10.1109/ICRA.2019.8794364; Cui HG, 2019, IEEE INT CONF ROBOT, P2090, DOI 10.1109/ICRA.2019.8793868; Dasgupta N, 2020, SSRN J, V11, P2020, DOI [10.2139/ssrn.3588585, DOI 10.2139/SSRN.3588585]; Deo N., 2018, 2018 IEEE INT VEH S, P1179; Deo N, 2018, IEEE COMPUT SOC CONF, P1549, DOI 10.1109/CVPRW.2018.00196; Ding WC, 2019, IEEE INT CONF ROBOT, P9610, DOI 10.1109/ICRA.2019.8793568; Donze A, 2010, LECT NOTES COMPUT SC, V6246, P92, DOI 10.1007/978-3-642-15297-9_9; Eason Wang, 2020, KDD '20: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, P2340, DOI 10.1145/3394486.3403283; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gupta A, 2018, PROC CVPR IEEE, P2255, DOI 10.1109/CVPR.2018.00240; Hasanbeig M, 2019, IEEE DECIS CONTR P, P5338, DOI 10.1109/CDC40024.2019.9028919; Huang X, 2019, IEEE INT CONF ROBOT, P9718, DOI 10.1109/ICRA.2019.8794282; Innes C., 2020, ARXIV200200784; Kapoor P., 2020, ARXIV201104950; Kim SY, 2020, PHYSIOTHER THEOR PR, V36, P1485, DOI 10.1080/09593985.2019.1566940; Leung K., 2020, ARXIV200800097; Leung K, 2019, IEEE INT VEH SYM, P185, DOI 10.1109/IVS.2019.8814167; Li X., 2019, IEEE INT C INTELL TR, P3960; Ma WC, 2017, PROC CVPR IEEE, P4636, DOI 10.1109/CVPR.2017.493; Makansi O, 2019, PROC CVPR IEEE, P7137, DOI 10.1109/CVPR.2019.00731; Park D., 2020, AUTOPHAGY, P1005; Raman V, 2014, IEEE DECIS CONTR P, P81, DOI 10.1109/CDC.2014.7039363; Sadeghian A, 2019, PROC CVPR IEEE, P1349, DOI 10.1109/CVPR.2019.00144; Salzmann T., 2020, ARXIV200103093; Sandler M, 2018, PROC CVPR IEEE, P4510, DOI 10.1109/CVPR.2018.00474; Shalev-Shwartz S., 2017, ARXIV170806374; Vasile Cristian-Ioan, 2017, 2017 IEEE International Conference on Robotics and Automation (ICRA), P1481, DOI 10.1109/ICRA.2017.7989177; Wang HQ, 2020, IEEE T NEUR NET LEAR, V31, P972, DOI [10.1109/TNNLS.2019.2912082, 10.1109/TKDE.2019.2903810]; Xu Zhe, 2019, IJCAI (U S), V28, P4010, DOI 10.24963/ijcai.2019/557 32 0 0 9 9 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 2377-3766 IEEE ROBOT AUTOM LET IEEE Robot. Autom. Lett. APR 2021 6 2 3459 3466 10.1109/LRA.2021.3062807 8 Robotics Robotics RD3PL WOS:000633394300039 2021-09-15 J Alonso-Monsalve, S; Whitehead, LH Alonso-Monsalve, Saul; Whitehead, Leigh H. Image-Based Model Parameter Optimization Using Model-Assisted Generative Adversarial Networks IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS English Article Fast simulation; generative adversarial networks (GANs); model-assisted GAN; parameter optimization We propose and demonstrate the use of a model-assisted generative adversarial network (GAN) to produce fake images that accurately match true images through the variation of the parameters of the model that describes the features of the images. The generator learns the model parameter values that produce fake images that best match the true images. Two case studies show excellent agreement between the generated best match parameters and the true parameters. The best match model parameter values can be used to retune the default simulation to minimize any bias when applying image recognition techniques to fake and true images. 
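The division of labor described above can be sketched compactly: a generator proposes model-parameter values, a differentiable stand-in for the simulation renders fake images from them, and a discriminator compares those to true images. Everything concrete in the toy below (the one-dimensional Gaussian "emulator", network sizes, optimizer settings) is an invented illustration, not the authors' setup, which emulates a full detector simulation with a CNN:

    import torch
    import torch.nn as nn

    def emulate(params, n=32):
        # Toy differentiable "emulator" standing in for the simulation:
        # renders a 1-D Gaussian bump from two model parameters (centre, width).
        x = torch.linspace(-1.0, 1.0, n)
        centre, width = params[:, :1], params[:, 1:2].abs() + 0.1
        return torch.exp(-((x - centre) / width) ** 2)

    gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))    # noise -> params
    disc = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 1))  # image -> logit
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    true_imgs = emulate(torch.tensor([[0.3, 0.5]]).expand(64, 2))  # "experimental data"

    for step in range(500):
        fake_imgs = emulate(gen(torch.randn(64, 8)))
        d_loss = (bce(disc(true_imgs), torch.ones(64, 1)) +
                  bce(disc(fake_imgs.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        g_loss = bce(disc(fake_imgs), torch.ones(64, 1))  # generator fools the critic
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    p = gen(torch.randn(256, 8))
    print(p[:, 0].mean().item(), p[:, 1].abs().mean().item())  # tends towards ~0.3, ~0.5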
In the case of a real-world experiment, the true images are experimental data with unknown true model parameter values, and the fake images are produced by a simulation that takes the model parameters as input. The model-assisted GAN uses a convolutional neural network to emulate the simulation for all parameter values; once trained, it can be used as a conditional generator for fast fake-image production. [Alonso-Monsalve, Saul; Whitehead, Leigh H.] CERN, CH-1211 Geneva, Switzerland; [Alonso-Monsalve, Saul] Univ Carlos III Madrid, Dept Comp Sci & Engn, Leganes 28911, Spain; [Whitehead, Leigh H.] Univ Cambridge, Cavendish Lab, Cambridge CB3 0HE, England Alonso-Monsalve, S (corresponding author), CERN, CH-1211 Geneva, Switzerland. saul.alonso.monsalve@cern.ch Alonso-Monsalve, Saul/0000-0002-9678-7121; Whitehead, Leigh/0000-0002-3327-2534 Abadi M, 2016, PROCEEDINGS OF OSDI'16: 12TH USENIX SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION, P265; Antipov G, 2017, ARXIV170201983; Arjovsky Martin, 2017, PRINCIPLED METHODS T; Bottou L, 2018, SIAM REV, V60, P223, DOI 10.1137/16M1080173; Bromley J., 1993, International Journal of Pattern Recognition and Artificial Intelligence, V7, P669, DOI 10.1142/S0218001493000339; Chen X, 2016, ADV NEUR IN, V29; Chintala S., 2016, How to train a GAN? Tips and tricks to make GANs work; Chollet F., 2015, KERAS; Chopra S, 2005, PROC CVPR IEEE, P539, DOI 10.1109/cvpr.2005.202; Creswell A, 2019, IEEE T NEUR NET LEAR, V30, P1967, DOI 10.1109/TNNLS.2018.2875194; de Oliveira L., 2017, COMPUT SOFTW BIG SCI, V1, P4, DOI [DOI 10.1007/S41781-017-0004-6, 10.1007/s41781-017-0004-6]; Goodfellow I, 2016, ADAPT COMPUT MACH LE, P1; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Hu Z, 2018, ARXIV180609764; Kingma D. P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Mirza M., 2014, ARXIV14111784; Paganini M, 2018, PHYS REV LETT, V120, DOI 10.1103/PhysRevLett.120.042003; Paganini M, 2018, PHYS REV D, V97, DOI 10.1103/PhysRevD.97.014021; Radford A., 2015, ARXIV PREPRINT ARXIV; Radovic A, 2018, NATURE, V560, P41, DOI 10.1038/s41586-018-0361-2; Salimans T., 2016; Schawinski K, 2017, MON NOT R ASTRON SOC, V467, pL110, DOI 10.1093/mnrasl/slx008; Schroff Florian, 2015, ARXIV150303832; Taigman Y, 2014, PROC CVPR IEEE, P1701, DOI 10.1109/CVPR.2014.220; Wu Chenshen, 2018, ADV NEURAL INFORM PR 25 4 4 2 2 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 2162-237X 2162-2388 IEEE T NEUR NET LEAR IEEE Trans. Neural Netw. Learn. Syst.
DEC 2020 31 12 5645 5650 10.1109/TNNLS.2020.2969327 6 Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic Computer Science; Engineering PA3JF WOS:000595533300049 32167911 Green Submitted, Bronze 2021-09-15 C Rezaei, M; Yang, HJ; Harmuth, K; Meinel, C IEEE Rezaei, Mina; Yang, Haojin; Harmuth, Konstantin; Meinel, Christoph Conditional Generative Adversarial Refinement Networks for Unbalanced Medical Image Semantic Segmentation 2019 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) IEEE Winter Conference on Applications of Computer Vision English Proceedings Paper 19th IEEE Winter Conference on Applications of Computer Vision (WACV) JAN 07-11, 2019 Waikoloa Village, HI IEEE, IEEE Comp Soc, IEEE Biometr Council, AF Res Lab, Amazon, Honeywell, Ancestry, Cognex, Google, Kitware, Percept Automata, SAP, Verisk Analyt, Voxel51, Qualcomm, Wolfram Language CLASSIFICATION We propose a new generative adversarial architecture to mitigate the imbalanced-data problem in medical image semantic segmentation, where the majority of pixels belong to a healthy region and few belong to a lesion or non-healthy region. A model trained with imbalanced data tends to be biased towards the healthy data, which is not desired in clinical applications, and the outputs predicted by such networks have high precision but low sensitivity. We propose a new conditional generative refinement network with three components: a generative, a discriminative, and a refinement network, which together mitigate the imbalanced-data problem through ensemble learning. The generative network learns to segment at the pixel level by getting feedback from the discriminative network according to the true positive and true negative maps. The refinement network, in turn, learns to predict the false positive and false negative masks produced by the generative network, which is of significant value, especially in medical applications. The final semantic segmentation masks are then composed from the outputs of the three networks. The proposed architecture shows state-of-the-art results on LiTS-2017 for simultaneous liver and lesion segmentation and on MDA231 for microscopic cell segmentation, and achieves competitive results on BraTS-2017 for brain tumor segmentation. [Rezaei, Mina; Yang, Haojin; Harmuth, Konstantin; Meinel, Christoph] Hasso Plattner Inst, Potsdam, Germany Rezaei, M (corresponding author), Hasso Plattner Inst, Potsdam, Germany.
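The abstract does not give the exact rule by which the final mask is composed from the three networks' outputs, so the sketch below shows one plausible pixel-wise composition under that caveat: add back pixels flagged as false negatives, suppress pixels flagged as false positives (all arrays are toy data):

    import numpy as np

    def compose_final_mask(gen_prob, fp_prob, fn_prob, thr=0.5):
        # One plausible composition (an assumption, not the paper's exact rule):
        # add back pixels the refinement network flags as false negatives and
        # suppress those it flags as false positives, then threshold.
        corrected = np.clip(gen_prob + fn_prob - fp_prob, 0.0, 1.0)
        return (corrected >= thr).astype(np.uint8)

    rng = np.random.default_rng(0)
    gen_prob = rng.random((4, 4))        # generator's per-pixel lesion probabilities
    fp_prob = 0.2 * rng.random((4, 4))   # refinement: predicted false positives
    fn_prob = 0.2 * rng.random((4, 4))   # refinement: predicted false negatives
    print(compose_final_mask(gen_prob, fp_prob, fn_prob))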
mina.rezaei@hpi.de; haojn.yang@hpi.de; konstantin.harmuth@hpi.de; christoph.meinel@hpi.de Abadi M, 2016, PROCEEDINGS OF OSDI'16: 12TH USENIX SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION, P265; Akram SU, 2016, LECT NOTES COMPUT SC, V10008, P21, DOI 10.1007/978-3-319-46976-8_3; Arteta C, 2012, LECT NOTES COMPUT SC, V7510, P348, DOI 10.1007/978-3-642-33415-3_43; Bakas S, 2017, CANC IMAGING ARCH; Bakas S, 2017, NATURE SCI DATA; Baksi S, 2017, 2017 IEEE 28TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR, AND MOBILE RADIO COMMUNICATIONS (PIMRC), DOI 10.1109/PIMRC.2017.8292212; Bi L., 2017, CORR; Cata M., 2017, MASKED V NET APPROAC, P42; Chollet F., 2015, KERAS; Dong Q., 2018, IEEE T PATTERN ANAL; Douzas G, 2018, EXPERT SYST APPL, V91, P464, DOI 10.1016/j.eswa.2017.09.030; Eaton-Rosen Z., 2017, USING NIFTYNET ENSEM, P61; Graves A, 2005, NEURAL NETWORKS, V18, P602, DOI 10.1016/j.neunet.2005.06.042; Grosges T, 2017, INT MICCAI BRAINL WO, P226; Han X., 2017, CORR; Hashemi Soheil, 2018, CORR; Heimann T, 2009, IEEE T MED IMAGING, V28, P1251, DOI 10.1109/TMI.2009.2013851; Inda MD, 2014, CANCERS, V6, P226, DOI 10.3390/cancers6010226; IOFFE S., 2015, CORR; Isensee F., BRAIN TUMOR SEGMENTA; Isola Phillip, 2016, CORR; Jang JW, 2014, PROCEEDINGS OF THE 2014 9TH INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS (VISAPP), VOL 1, P15; Kamnitsas K, 2018, LECT NOTES COMPUT SC, V10670, P450, DOI 10.1007/978-3-319-75238-9_38; Keramat M, 1998, IEEE T CIRCUITS-II, V45, P575, DOI 10.1109/82.673639; Kim YJ, 2013, HEALTHC INFORM RES, V19, P186, DOI 10.4258/hir.2013.19.3.186; Kohl S., 2017, CORR; Kohli MD, 2017, J DIGIT IMAGING, V30, P392, DOI 10.1007/s10278-017-9976-3; Magnusson KEG, 2012, 2012 9TH IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI), P382, DOI 10.1109/ISBI.2012.6235564; Mariani G., 2018, ARXIV180309655; Menze BH, 2015, IEEE T MED IMAGING, V34, P1993, DOI 10.1109/TMI.2014.2377694; Mirza M., 2014, ARXIV14111784; Moeskops P., 2017, ADVERSARIAL TRAINING, P56; Morales R. R., 2012, ADV IMAGE SEGMENTATI; Nasr G. E., 2002, FLAIRS C, P381; Nyul LG, 2000, IEEE T MED IMAGING, V19, P143, DOI 10.1109/42.836373; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Shidu D., 2017, SEPARATE 3DSEGNET AR, P54; Srebro N, 2005, LECT NOTES COMPUT SC, V3559, P545, DOI 10.1007/11503415_37; Sudre CH, 2017, LECT NOTES COMPUT SC, V10553, P240, DOI 10.1007/978-3-319-67558-9_28; Sun YM, 2007, PATTERN RECOGN, V40, P3358, DOI 10.1016/j.patcog.2007.04.009; Vorontsov E., 2017, CORR; Wang G, 2017, ARXIV170900382; Xue Y., 2017, CORR; Zhao P., 2014, ARXIV14053080 44 5 6 0 4 IEEE NEW YORK 345 E 47TH ST, NEW YORK, NY 10017 USA 2472-6737 978-1-7281-1975-5 IEEE WINT CONF APPL 2019 1836 1845 10.1109/WACV.2019.00200 10 Engineering, Electrical & Electronic Engineering BM8MK WOS:000469423400192 2021-09-15 J Zhu, JX; Meng, LL; Wu, WX; Choi, DM; Ni, JJ Zhu, Jinxiu; Meng, Leilei; Wu, Wenxia; Choi, Dongmin; Ni, Jianjun Generative adversarial network-based atmospheric scattering model for image dehazing DIGITAL COMMUNICATIONS AND NETWORKS English Article Dehazing; Edge computing applications; Atmospheric scattering model; Contrast loss This paper presents a trainable Generative Adversarial Network (GAN)-based end-to-end system for image dehazing, which is named the DehazeGAN. DehazeGAN can be used for edge computing-based applications, such as roadside monitoring. It adopts two networks: one is generator (G), and the other is discriminator (D). 
The G adopts the U-Net architecture, whose layers are particularly designed to incorporate the atmospheric scattering model of image dehazing. By using a reformulated atmospheric scattering model, the weights of the generator network are initialized by the coarse transmission map, and the biases are adaptively adjusted by using the previous round's trained weights. Since the details may be blurry after the fog is removed, the contrast loss is added to enhance the visibility actively. Aside from the typical GAN adversarial loss, the pixel-wise Mean Square Error (MSE) loss, the contrast loss and the dark channel loss are introduced into the generator loss function. Extensive experiments on benchmark images, the results of which are compared with those of several state-of-the-art methods, demonstrate that the proposed DehazeGAN performs better and is more effective. [Zhu, Jinxiu; Meng, Leilei; Wu, Wenxia; Ni, Jianjun] Hohai Univ, Coll Internet Things Engn, Changzhou 213022, Jiangsu, Peoples R China; [Zhu, Jinxiu; Ni, Jianjun] Jiangsu Prov Collaborat Innovat Ctr World Water V, Nanjing 211100, Jiangsu, Peoples R China; [Choi, Dongmin] Chosun Univ, Gwangju 61452, South Korea Choi, DM (corresponding author), Chosun Univ, Gwangju 61452, South Korea. zhujinxiu1972@163.com; 18360821591@163.com; wuwenx1995@163.com; jdmcc@chosun.ac.kr; 20051711@hhu.edu.cn Basic Science Research Program through the National Research Foundation of Korea (NRF) - Ministry of Education [NRF-2018R1D1A1B07043331] This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (grant number NRF-2018R1D1A1B07043331). Cai BL, 2016, IEEE T IMAGE PROCESS, V25, P5187, DOI 10.1109/TIP.2016.2598681; Chen SJ, 2019, NEUROCOMPUTING, V358, P275, DOI 10.1016/j.neucom.2019.05.046; Chen ZH, 2018, 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), P2626, DOI 10.1109/ICASSP.2018.8462078; [范新南 Fan Xinnan], 2019, [计算机辅助设计与图形学学报, Journal of Computer-Aided Design & Computer Graphics], V31, P1148; Fattal R, 2008, ACM T GRAPHIC, V27, DOI 10.1145/1360612.1360671; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Guo, 2017, ANN C MED IM UND AN, P506, DOI [10.1007/978-3-319-60964-5_44, DOI 10.1007/978-3-319-60964-5_44]; Guo F, 2012, RES IMAGE DEFOGGING; Hautiere Nicolas, 2008, Image Analysis & Stereology, V27, P87; He K., P IEEE C COMPUTER VI, P770; He KM, 2011, IEEE T PATTERN ANAL, V33, P2341, DOI 10.1109/TPAMI.2010.168; Huang KQ, 2006, COMPUT VIS IMAGE UND, V103, P52, DOI 10.1016/j.cviu.2006.02.007; Ketkar N, 2017, STOCHASTIC GRADIENT; Lee D, 2019, IEEE ACCESS, V7, P110344, DOI 10.1109/ACCESS.2019.2934320; Li BY, 2019, IEEE T IMAGE PROCESS, V28, P492, DOI 10.1109/TIP.2018.2867951; Li BY, 2017, IEEE I CONF COMP VIS, P4780, DOI 10.1109/ICCV.2017.511; Li J, 2018, IEEE ROBOT AUTOM LET, V3, P387, DOI 10.1109/LRA.2017.2730363; Malav R., 2018, 14 AS C COMP VIS ACC, P593; McCartney E J, 1976, OPTICS ATMOSPHERE SC; Mithun NC, 2016, EXPERT SYST APPL, V62, P17, DOI 10.1016/j.eswa.2016.06.020; Murugadoss R, 2014, IEEE I C COMP INT CO, P1062; Nayar S. 
K., 1999, Proceedings of the Seventh IEEE International Conference on Computer Vision, P820, DOI 10.1109/ICCV.1999.790306; Negru M, 2015, IEEE T INTELL TRANSP, V16, P2257, DOI 10.1109/TITS.2015.2405013; Park H, 2014, IEEE IMAGE PROC, P4502, DOI 10.1109/ICIP.2014.7025913; Ren WQ, 2016, LECT NOTES COMPUT SC, V9906, P154, DOI 10.1007/978-3-319-46475-6_10; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Scharf D, 2014, P IEEE AER C MAR, P1, DOI [10.1109/AERO.2014.6836462, DOI 10.1109/AERO.2014.6836462]; Su X, 2019, IEEE T IND INFORM, V15, P5765, DOI 10.1109/TII.2019.2912175; Su X, 2016, EURASIP J WIREL COMM, DOI 10.1186/s13638-016-0732-z; Tan RT, 2008, PROC CVPR IEEE, P2347, DOI 10.1109/cvpr.2008.4587643; Tian B, 2014, IEEE T INTELL TRANSP, V15, P597, DOI 10.1109/TITS.2013.2283302; Wang AN, 2019, IEEE T IMAGE PROCESS, V28, P381, DOI 10.1109/TIP.2018.2868567; Wang JB, 2015, NEUROCOMPUTING, V149, P718, DOI 10.1016/j.neucom.2014.08.005; Wei ZS, 2019, KSII T INTERNET INF, V13, P3942, DOI 10.3837/tiis.2019.08.007; Wu FF, 2018, LECT NOTES COMPUT SC, V11164, P877, DOI 10.1007/978-3-030-00776-8_80; Wu Wenxia, 2019, CONCURRENCY COMPUT P, P1; Zeiler MD, 2014, LECT NOTES COMPUT SC, V8689, P818, DOI 10.1007/978-3-319-10590-1_53; Zhang YD, 2018, MULTIMED TOOLS APPL, V77, P21825, DOI 10.1007/s11042-017-4383-9; Zhao JM, 2019, ENG APPL ARTIF INTEL, V82, P263, DOI 10.1016/j.engappai.2019.04.003 39 0 0 2 2 KEAI PUBLISHING LTD BEIJING 16 DONGHUANGCHENGGEN NORTH ST, BEIJING, DONGHENG DISTRICT 100717, PEOPLES R CHINA 2468-5925 2352-8648 DIGIT COMMUN NETW Digit. Commun. Netw. MAY 2021 7 2 178 186 10.1016/j.dcan.2020.08.003 9 Telecommunications Telecommunications SQ3XI WOS:000660289300003 gold 2021-09-15 C Anand, A; Gorde, K; Moniz, JRA; Park, N; Chakraborty, T; Chu, BT Abe, N; Liu, H; Pu, C; Hu, X; Ahmed, N; Qiao, M; Song, Y; Kossmann, D; Liu, B; Lee, K; Tang, J; He, J; Saltz, J Anand, Ankesh; Gorde, Kshitij; Moniz, Joel Ruben Antony; Park, Noseong; Chakraborty, Tanmoy; Chu, Bei-Tseng Phishing URL Detection with Oversampling based on Text Generative Adversarial Networks 2018 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA) IEEE International Conference on Big Data English Proceedings Paper IEEE International Conference on Big Data (Big Data) DEC 10-13, 2018 Seattle, WA IEEE, IEEE Comp Soc, Expedia Grp, Baidu, Squirrel AI Learning, Ankura, Springer phishing; text-GANs; generative adversarial networks; oversampling The problem of imbalanced classes arises frequently in binary classification tasks. If one class outnumbers another, trained classifiers become heavily biased towards the majority class. For phishing URL detection, it is very natural that the number of collected benign URLs (i.e., the majority class) is much larger than the number of collected phishy URLs (i.e., the minority class). Oversampling the minority class can be a powerful tool to overcome this situation. However, existing methods perform the oversampling task in the feature space where the original data format is removed and URLs are succinctly represented by vectors. These methods are successful only if feature definitions are correct and the dataset is diverse and not too sparse. In this paper, we propose an oversampling technique in the data space. We train text generative adversarial networks (text-GANs) with URLs in the minority class and generate synthetic URLs that can be made part of the training set. 
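A compact stand-in for the data-space oversampling idea described above, with a character-level bigram sampler deliberately substituted for the paper's text-GAN generator (the URLs are made up):

    import random
    from collections import defaultdict

    minority_urls = ["http://paypa1-login.example.com/verify",
                     "http://secure-update.example.net/account"]  # made-up phishy URLs

    # Character-level bigram sampler fitted on the minority class only -- a
    # deliberately simple stand-in for the text-GAN generator in this record.
    transitions = defaultdict(list)
    for url in minority_urls:
        for prev_ch, next_ch in zip("^" + url, url + "$"):  # ^ start, $ end marker
            transitions[prev_ch].append(next_ch)

    def sample_url(max_len=60):
        out, ch = [], "^"
        for _ in range(max_len):
            ch = random.choice(transitions[ch])
            if ch == "$":
                break
            out.append(ch)
        return "".join(out)

    random.seed(7)
    synthetic = [sample_url() for _ in range(5)]
    augmented = minority_urls + synthetic  # oversampled in data space, not feature space
    print(synthetic)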
We crawl a crowd-sourced URL repository to collect recently discovered phishy and benign URLs. Our experiments demonstrate significant performance improvements after using the proposed oversampling technique. Interestingly, some of the original test URLs are exactly regenerated by the proposed text generative model. [Anand, Ankesh] Montreal Inst Learning Algorithms, Montreal, PQ, Canada; [Gorde, Kshitij; Chu, Bei-Tseng] Univ N Carolina, Charlotte, NC USA; [Moniz, Joel Ruben Antony] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA; [Park, Noseong] George Mason Univ, Fairfax, VA 22030 USA; [Chakraborty, Tanmoy] IIIT Delhi, New Delhi, India Park, N (corresponding author), George Mason Univ, Fairfax, VA 22030 USA. ankesh.anand@umontreal.ca; kgorde@uncc.edu; jrmoniz@andrew.cmu.edu; npark9@gmu.edu; tanmoy@iiitd.ac.in; billchu@uncc.edu Park, Noseong/ABG-3935-2020 Office of Naval Research under the MURI grant [N00014-18-1-2670]; Indo-UK Collaborative Project [DST/INT/UKP158/2017] This work was partially supported by the Office of Naval Research under the MURI grant N00014-18-1-2670, and the Indo-UK Collaborative Project DST/INT/UKP158/2017. Arjovsky M, 2017, CORR; Arjovsky M., 2017, ARXIV E PRINTS; Bahnsen A. C., 2017, APWG S EL CRIM RES E; Blum A., 2010, P 3 ACM WORKSH ART I; Bowyer K. W., 2011, CORR; Darling M, 2015, PROCEEDINGS OF THE 2015 INTERNATIONAL CONFERENCE ON HIGH PERFORMANCE COMPUTING & SIMULATION (HPCS 2015), P195, DOI 10.1109/HPCSim.2015.7237040; Donoho D. L., 2003, P NATL ACAD SCI, V100; Feroz MN, 2015, IEEE INT CONGR BIG, P635, DOI 10.1109/BigDataCongress.2015.97; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gulrajani I, 2017, ARXIV170400028; Han H, 2005, P INT C ADV INT COMP; He HB, 2008, IEEE IJCNN, P1322, DOI 10.1109/IJCNN.2008.4633969; Hochreiter S, 1997, NEURAL COMPUT, V9, P1735, DOI 10.1162/neco.1997.9.8.1735; Jang E., 2016, ARXIV E PRINTS; Ma J., 2009, P SIGKDD C PAR FRANC; Ma Justin, 2009, P 26 ANN INT C MACH P 26 ANN INT C MACH; Mohammad R. M., 2014, NEURAL COMPUTING APP, V25; Mohammad RM, 2012, INT CONF INTERNET, P492; Nguyen Hien M., 2011, International Journal of Knowledge Engineering and Soft Data Paradigms, V3, P4, DOI 10.1504/IJKESDP.2011.039875; ROUSSEEUW PJ, 1987, J COMPUT APPL MATH, V20, P53, DOI 10.1016/0377-0427(87)90125-7; Sorio Enrico, 2013, 2013 International Conference on Availability, Reliability and Security (ARES), P242, DOI 10.1109/ARES.2013.31; Verma Rakesh, 2015, P 5 ACM C DAT APPL S; Whittaker C., 2010, NDSS 10; Yu LT, 2017, THIRTY-FIRST AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, P2852 24 6 6 0 0 IEEE NEW YORK 345 E 47TH ST, NEW YORK, NY 10017 USA 2639-1589 978-1-5386-5035-6 IEEE INT CONF BIG DA 2018 1168 1177 10 Computer Science, Artificial Intelligence; Computer Science, Information Systems; Computer Science, Theory & Methods Computer Science BM7WO WOS:000468499301033 2021-09-15 J Olmschenk, G; Zhu, ZG; Tang, H Olmschenk, Greg; Zhu, Zhigang; Tang, Hao Generalizing semi-supervised generative adversarial networks to regression using feature contrasting COMPUTER VISION AND IMAGE UNDERSTANDING English Article Generative adversarial learning; Age estimation; Regression In this work, we generalize semi-supervised generative adversarial networks (GANs) from classification problems to regression problems. In the last few years, the importance of improving the training of neural networks using semi-supervised training has been demonstrated for classification problems. 
We present a novel loss function, called feature contrasting, resulting in a discriminator which can distinguish between fake and real data based on feature statistics. This method avoids potential biases and limitations of alternative approaches. The generalization of semi-supervised GANs to the regime of regression problems opens their use to countless applications as well as providing an avenue for a deeper understanding of how GANs function. We first demonstrate the capabilities of semi-supervised regression GANs on a toy dataset, which allows for a detailed understanding of how they operate in various circumstances. This toy dataset is used to provide a theoretical basis for the semi-supervised regression GAN. We then apply the semi-supervised regression GANs to a number of real-world computer vision applications: age estimation, driving steering angle prediction, and crowd counting from single images. We perform extensive tests of what accuracy can be achieved with significantly reduced annotated data. Through the combination of the theoretical example and real-world scenarios, we demonstrate how semi-supervised GANs can be generalized to regression problems. [Olmschenk, Greg; Zhu, Zhigang] CUNY City Coll, 160 Convent Ave, New York, NY 10031 USA; [Olmschenk, Greg; Zhu, Zhigang] CUNY, Grad Ctr, 365 5th Ave, New York, NY 10016 USA; [Tang, Hao] CUNY, Borough Manhattan Community Coll, 199 Chambers St, New York, NY 10007 USA Olmschenk, G (corresponding author), CUNY, Grad Ctr, 365 5th Ave, New York, NY 10016 USA. golmschenk@gradcenter.cuny.edu DOEUnited States Department of Energy (DOE) [1DE-AC05060R23100, 1DE-SC0014664]; National Science FoundationNational Science Foundation (NSF) [1827505, 1737533]; Bentley Systems, Incorporated, through a CUNY-Bentley Collaborative Research Agreement (CRA); Defense Intelligence Agency (DIA) via the Rutgers University Consortium for Critical Technology Studies This research was initiated under appointments to the U.S. Department of Homeland Security (DHS) Science & Technology Directorate Office of University Programs, administered by the Oak Ridge Institute for Science and Education (ORISE) through an interagency agreement between the U.S. Department of Energy (DOE) and DHS. ORISE is managed by ORAU under DOE contract number 1DE-AC05060R23100 and 1DE-SC0014664. All opinions expressed in this paper are the author's and do not necessarily reflect the policies and views of DHS, DOE, or ORAU/ORISE. The research is also supported by National Science Foundation through Awards PFI #1827505 and SCCPlanning #1737533, and Bentley Systems, Incorporated, through a CUNY-Bentley Collaborative Research Agreement (CRA). Additional support provided by the Defense Intelligence Agency (DIA) via the Rutgers University Consortium for Critical Technology Studies. Ali I, 2015, REMOTE SENS-BASEL, V7, P16398, DOI 10.3390/rs71215841; Barnett S. A., 2018, ARXIV180611382; Bazrafkan S., 2018, ARXIV180510864; Bland LM, 2015, CONSERV BIOL, V29, P250, DOI 10.1111/cobi.12372; Bottou L., 2017, ARXIV170107875; Chen S., 2017, SULLY CHEN DRIVING D; Dai Zihang, 2017, ADV NEURAL INFORM PR, V30, P6510; Ding X, 2015, PROCEEDINGS OF THE TWENTY-FOURTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE (IJCAI), P2327; Dodge Samuel, 2017, ARXIV170502498; Eigen D., 2014, ADV NEURAL INFORM PR, DOI DOI 10.1007/978-3-540-28650-9_5; Elsayed G.
F., 2018, ADV NEURAL INFORM PR; Fabbro S., 2017, MON NOT R ASTRON SOC; Fefferman C, 2016, J AM MATH SOC, V29, P983, DOI 10.1090/jams/852; Goodfellow I., 2016, NIPS 2016 TUTORIAL G; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gulrajani I, 2017, IMPROVED TRAINING WA, DOI DOI 10.5555/3295222.3295327; Hartikainen J., 2012, ARXIV12064670; Huang G, 2017, PROC CVPR IEEE, P2261, DOI 10.1109/CVPR.2017.243; Idrees H., 2018, ARXIV180801050; LeCun Y, 1999, LECT NOTES COMPUT SC, V1681, P319, DOI 10.1007/3-540-46805-6_19; LeCun Y, 2015, NATURE, V521, P436, DOI 10.1038/nature14539; Liu YG, 2005, J GEOPHYS RES-OCEANS, V110, DOI 10.1029/2004JC002786; Lv YS, 2015, IEEE T INTELL TRANSP, V16, P865, DOI 10.1109/TITS.2014.2345663; Marino DL, 2016, IEEE IND ELEC, P7046, DOI 10.1109/IECON.2016.7793413; NIU ZX, 2016, PROC CVPR IEEE, P4920, DOI DOI 10.1109/CVPR.2016.532; Oliveira T.P., 2016, INT J BIG DATA INTEL, V3, P28, DOI [10.1504/IJBDI.2016.073903, DOI 10.1504/IJBDI.2016.073903]; Pan X., 2017, ARXIV170403952 ARXIV170403952; Pathak D, 2016, PROC CVPR IEEE, P2536, DOI 10.1109/CVPR.2016.278; Radford A., 2015, COMPUTER ENCE; Rezagholizadeh M, 2018, 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), P2806, DOI 10.1109/ICASSP.2018.8462534; Rothe R, 2018, INT J COMPUT VISION, V126, P144, DOI 10.1007/s11263-016-0940-3; Salimans T, 2016, ADV NEURAL INFORM PR, P2234, DOI DOI 10.5555/3157096.3157346; Schwarz M, 2015, IEEE INT CONF ROBOT, P1329, DOI 10.1109/ICRA.2015.7139363; Souly Nasim, 2017, ARXIV170309695; Springenberg J.T., 2015, ARXIV151106390; Sricharan K., 2017, ARXIV170805789; Xingjian S., 2015, ADV NEURAL INFORM PR, P802; Zhang C, 2015, PROC CVPR IEEE, P833, DOI 10.1109/CVPR.2015.7298684; Zhang YY, 2016, PROC CVPR IEEE, P589, DOI 10.1109/CVPR.2016.70 39 3 3 2 14 ACADEMIC PRESS INC ELSEVIER SCIENCE SAN DIEGO 525 B ST, STE 1900, SAN DIEGO, CA 92101-4495 USA 1077-3142 1090-235X COMPUT VIS IMAGE UND Comput. Vis. Image Underst. SEP 2019 186 1 12 10.1016/j.cviu.2019.06.004 12 Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic Computer Science; Engineering IR6QU WOS:000481564600001 Green Submitted 2021-09-15 J Tschuchnig, ME; Oostingh, GJ; Gadermayr, M Tschuchnig, Maximilian E.; Oostingh, Gertie J.; Gadermayr, Michael Generative Adversarial Networks in Digital Pathology: A Survey on Trends and Future Potential PATTERNS English Review SEGMENTATION; CANCER; IMAGES; CLASSIFICATION; SCALE Image analysis in the field of digital pathology has recently gained increased popularity. The use of high-quality whole-slide scanners enables the fast acquisition of large amounts of image data, showing extensive context and microscopic detail at the same time. Simultaneously, novel machine-learning algorithms have boosted the performance of image analysis approaches. In this paper, we focus on a particularly powerful class of architectures, the so-called generative adversarial networks (GANs) applied to histological image data. Besides improving performance, GANs also enable previously intractable application scenarios in this field. However, GANs could exhibit a potential for introducing bias. Hereby, we summarize the recent state-of-the-art developments in a generalizing notation, present the main applications of GANs, and give an outlook of some chosen promising approaches and their possible future applications. In addition, we identify currently unavailable methods with potential for future applications. 
[Tschuchnig, Maximilian E.; Gadermayr, Michael] Salzburg Univ Appl Sci, Dept Informat Technol & Syst Management, A-5412 Puch Bei Hallein, Austria; [Tschuchnig, Maximilian E.; Oostingh, Gertie J.] Salzburg Univ Appl Sci, Dept Biomed Sci, A-5412 Puch Bei Hallein, Austria Tschuchnig, ME (corresponding author), Salzburg Univ Appl Sci, Dept Informat Technol & Syst Management, A-5412 Puch Bei Hallein, Austria.; Tschuchnig, ME (corresponding author), Salzburg Univ Appl Sci, Dept Biomed Sci, A-5412 Puch Bei Hallein, Austria. maximilian.tschuchnig@fh-salzbury.ac.at , Michael/0000-0003-1450-9222; tschuchnig, Maximilian Ernst/0000-0002-1441-4752 County of Salzburg [FHS-2019-10-KIAMed] This work was partially funded by the County of Salzburg under grant number FHS-2019-10-KIAMed. Abdolhoseini M, 2019, SCI REP-UK, V9, DOI 10.1038/s41598-019-38813-2; Almahairi A., 2018, P INT C MACH LEARN I; Arjovsky M., 2017, 170107875 ARXIV; BenTaieb Aicha, 2016, Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. 19th International Conference. Proceedings: LNCS 9901, P460, DOI 10.1007/978-3-319-46723-8_53; BenTaieb A, 2018, IEEE T MED IMAGING, V37, P792, DOI 10.1109/TMI.2017.2781228; Bug D., 2019, P 2 MICCAI WORKSH CO; Burlingame EA, 2018, PROC SPIE, V10581, DOI 10.1117/12.2293249; Chen X., 2016, P C ADV NEUR INF PRO; de Bel T., 2019, P 2 INT C MED IM DEE, P151; Dimitriou N, 2019, FRONT MED-LAUSANNE, V6, DOI 10.3389/fmed.2019.00264; Gadermayr M., 2019, P 2 INT C MED IM DEE; Gadermayr M., 2020, ABS200411001 CORR, P11001; Gadermayr M, 2019, IEEE T MED IMAGING, V38, P2293, DOI 10.1109/TMI.2019.2899364; Gadermayr M, 2019, COMPUT MED IMAG GRAP, V71, P40, DOI 10.1016/j.compmedimag.2018.11.002; Gadermayr M, 2018, LECT NOTES COMPUT SC, V11071, P165, DOI 10.1007/978-3-030-00934-2_19; Gadermayr M, 2016, LECT NOTES COMPUT SC, V9730, P616, DOI 10.1007/978-3-319-41501-7_69; Gecer B, 2018, PATTERN RECOGN, V84, P345, DOI 10.1016/j.patcog.2018.07.022; Ghorbani A., 2019, 191108716 ARXIV; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gupta A., 2019, 190202248 ARXIV; Gupta L., 2019, P INT C MED IM COMP; Hou L., 2019, P IEEE C COMP VIS PA; Hou L., 2016, P INT C COMP VIS CVP; Hou L., 2017, 171205021 ARXIV; Hu B, 2019, IEEE J BIOMED HEALTH, V23, P1316, DOI 10.1109/JBHI.2018.2852639; Huo Y., 2017, ABS171207695 CORR; Jaderberg M, 2015, ADV NEUR IN, V28; Jiang J, 2018, LECT NOTES COMPUT SC, V11071, P777, DOI 10.1007/978-3-030-00934-2_86; Jung C, 2010, IEEE T BIO-MED ENG, V57, P2825, DOI 10.1109/TBME.2010.2060486; Karras T., 2018, P INT C LEARN REPR I; Kearney V, 2020, RADIOL ARTIF INTELL, V2, DOI DOI 10.1148/RYAI.2020190027; Khan AM, 2014, IEEE T BIO-MED ENG, V61, P1729, DOI 10.1109/TBME.2014.2303294; Koch G., 2015, P ICML WORKSH DEEP L, V2; Kooi T, 2017, MED IMAGE ANAL, V35, P303, DOI 10.1016/j.media.2016.07.007; Lahiani A, 2019, LECT NOTES COMPUT SC, V11764, P568, DOI 10.1007/978-3-030-32239-7_63; Lecouat B., 2018, 181207832 ARXIV; Lei Y, 2019, MED PHYS, V46, P3565, DOI 10.1002/mp.13617; Levine AB, 2020, J PATHOL, V252, P178, DOI 10.1002/path.5509; Litjens G, 2017, MED IMAGE ANAL, V42, P60, DOI 10.1016/j.media.2017.07.005; Macenko M, 2009, 2009 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING: FROM NANO TO MACRO, VOLS 1 AND 2, P1107, DOI 10.1109/ISBI.2009.5193250; Mahmood F, 2020, IEEE T MED IMAGING, V39, P3257, DOI 10.1109/TMI.2019.2927182; Marek K., 2019, J DIGIT IMAGING, V33, P231; Metter DM, 2019, JAMA NETW OPEN, V2, DOI 10.1001/jamanetworkopen.2019.4337; Mirza M., 2014, ARXIV14111784; Modanwal G, 2020, PROC 
SPIE, V11314, DOI 10.1117/12.2551301; Mosaliganti K, 2008, IEEE T VIS COMPUT GR, V14, P863, DOI 10.1109/TVCG.2008.30; Ojala T, 2002, IEEE T PATTERN ANAL, V24, P971, DOI 10.1109/TPAMI.2002.1017623; Persson J, 2014, SCAND J UROL, V48, P160, DOI 10.3109/21681805.2013.820788; Petriceks AH, 2018, ACAD PATHOL, V5, DOI 10.1177/2374289518765457; Quiros A.C., 2019, 190702644 ARXIV; Rana A, 2018, 2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA), P828, DOI 10.1109/ICMLA.2018.00133; Reinhard E, 2001, IEEE COMPUT GRAPH, V21, P34, DOI 10.1109/38.946629; Ren J, 2018, LECT NOTES COMPUT SC, V11071, P201, DOI 10.1007/978-3-030-00934-2_23; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Salimans T, 2016, ADV NEURAL INFORM PR, P2234, DOI DOI 10.5555/3157096.3157346; Sanchez J, 2013, INT J COMPUT VISION, V105, P222, DOI 10.1007/s11263-013-0636-x; Senaras C, 2018, PLOS ONE, V13, DOI 10.1371/journal.pone.0196846; Shaban MT, 2019, I S BIOMED IMAGING, P953, DOI 10.1109/ISBI.2019.8759152; Shrivastava A, 2016, PROC CVPR IEEE, P761, DOI 10.1109/CVPR.2016.89; Sirinukunwattana K, 2017, MED IMAGE ANAL, V35, P489, DOI 10.1016/j.media.2016.08.008; Tellez D, 2019, MED IMAGE ANAL, V58, DOI 10.1016/j.media.2019.101544; Tsuda H, 2000, JPN J CANCER RES, V91, P451, DOI 10.1111/j.1349-7006.2000.tb00966.x; Wang D., 2017, 2017 INT C MACH LEAR; Wei J., 2019, P NEURIPS WORKSH MAC; Wolterink Jelmer M., 2017, Simulation and Synthesis in Medical Imaging. Second International Workshop, SASHIMI 2017. Held in Conjunction with MICCAI 2017. Proceedings: LNCS 10557, P14, DOI 10.1007/978-3-319-68127-6_2; Xu Y, 2014, MED IMAGE ANAL, V18, P591, DOI 10.1016/j.media.2014.01.010; Xu Z., 2019, ABS190104059 ARXIV; Yang JM, 2016, PROC CVPR IEEE, P193, DOI 10.1109/CVPR.2016.28; Zanjani FG, 2018, I S BIOMED IMAGING, P573, DOI 10.1109/ISBI.2018.8363641; Zhao A, 2019, PROC CVPR IEEE, P8535, DOI 10.1109/CVPR.2019.00874; Zhou NY, 2019, LECT NOTES COMPUT SC, V11764, P694, DOI 10.1007/978-3-030-32239-7_77; Zhu J.-Y., 2017, P INT C COMP VIS ICC 72 7 7 1 1 ELSEVIER AMSTERDAM RADARWEG 29, 1043 NX AMSTERDAM, NETHERLANDS 2666-3899 PATTERNS Patterns SEP 11 2020 1 6 100089 10.1016/j.patter.2020.100089 11 Computer Science, Artificial Intelligence; Computer Science, Information Systems; Computer Science, Interdisciplinary Applications Computer Science SH0LO WOS:000653829700007 33205132 gold, Green Submitted, Green Published 2021-09-15 J Wu, B; Liu, L; Yang, YQ; Zheng, KF; Wang, XJ Wu, Bin; Liu, Le; Yang, Yanqing; Zheng, Kangfeng; Wang, Xiujuan Using Improved Conditional Generative Adversarial Networks to Detect Social Bots on Twitter IEEE ACCESS English Article Social bot detection; conditional generative adversarial networks; data augmentation; supervised classification; imbalanced data IMBALANCED DATA; CLASSIFICATION; SMOTE; RULES The detection and removal of malicious social bots in social networks has become an area of interest in industry and academia. The widely used bot detection method based on machine learning leads to an imbalance in the number of samples in different categories. Classifier bias leads to a low detection rate of minority samples. Therefore, we propose an improved conditional generative adversarial network (improved CGAN) to extend imbalanced data sets before applying training classifiers to improve the detection accuracy of social bots. 
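As the record goes on to describe, the improved CGAN trains its critic with a Wasserstein objective plus a gradient penalty. A minimal sketch of the standard WGAN-GP penalty term follows, with toy shapes, a made-up critic, and the conventional lambda = 10 all assumed (the paper's exact networks and hyper-parameters are not given here):

    import torch
    import torch.nn as nn

    def gradient_penalty(critic, real, fake, cond, lam=10.0):
        # WGAN-GP term: penalise the critic's gradient norm at points
        # interpolated between real and generated samples (lambda = 10 assumed).
        eps = torch.rand(real.size(0), 1)
        mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
        score = critic(torch.cat([mixed, cond], dim=1)).sum()
        grad, = torch.autograd.grad(score, mixed, create_graph=True)
        return lam * ((grad.norm(2, dim=1) - 1.0) ** 2).mean()

    # Toy conditional critic over 16-d account-feature vectors; the condition is
    # a one-hot cluster label (e.g. a GKDPCA cluster id).
    critic = nn.Sequential(nn.Linear(16 + 4, 32), nn.ReLU(), nn.Linear(32, 1))
    real = torch.randn(8, 16)
    fake = torch.randn(8, 16)
    cond = torch.eye(4)[torch.randint(0, 4, (8,))]
    print(gradient_penalty(critic, real, fake, cond))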
To generate an auxiliary condition, we propose a modified clustering algorithm, namely, the Gaussian kernel density peak clustering algorithm (GKDPCA), which avoids the generation of data-augmentation noise and eliminates imbalances between and within social bot class distributions. Furthermore, we improve the CGAN convergence judgment condition by introducing the Wasserstein distance with a gradient penalty, which addresses the mode collapse and vanishing gradients of the traditional CGAN. In experiments, the improved CGAN is compared with three common oversampling algorithms, and the effects of the imbalance degree and of the expansion ratio of the original data on oversampling are studied. The results show that the improved CGAN outperforms the other methods, achieving higher evaluation scores in terms of F1-score, G-mean and AUC. [Wu, Bin; Liu, Le; Yang, Yanqing; Zheng, Kangfeng] Beijing Univ Posts & Telecommun, Sch Cyberspace Secur, Beijing 100876, Peoples R China; [Wang, Xiujuan] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China Wu, B (corresponding author), Beijing Univ Posts & Telecommun, Sch Cyberspace Secur, Beijing 100876, Peoples R China. binwu@bupt.edu.cn yang, yanqing/C-1542-2019 yang, yanqing/0000-0001-9993-7757; Wu, Bin/0000-0003-0657-3427 National Key Research and Development Program of China [2017YFB0802703]; Beijing Natural Science FoundationBeijing Natural Science Foundation [4202002] This work was supported in part by the National Key Research and Development Program of China under Grant 2017YFB0802703, and in part by the Beijing Natural Science Foundation under Grant 4202002. Abokhodair N, 2015, PROCEEDINGS OF THE 2015 ACM INTERNATIONAL CONFERENCE ON COMPUTER-SUPPORTED COOPERATIVE WORK AND SOCIAL COMPUTING (CSCW'15), P839, DOI 10.1145/2675133.2675208; Alothali E, 2018, IEEE INT CONF INNOV, P175, DOI 10.1109/INNOVATIONS.2018.8605995; Arnold A, 2007, KDD-2007 PROCEEDINGS OF THE THIRTEENTH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, P66; Aslam S., 2019, TWITTER NUMBERS STAT; Awasare V.
K., 2017, 2 INT C EL COMP COMM, P1; Barstugan M., 2017, P INT ART INT DAT PR, P1; Barua S, 2014, IEEE T KNOWL DATA EN, V26, P405, DOI 10.1109/TKDE.2012.232; Benchaji I., 2018, 2018 2 CYB SEC NETW, P1; Bhandari S, 2019, LIT VOICE, V1, P36; Buda M, 2018, NEURAL NETWORKS, V106, P249, DOI 10.1016/j.neunet.2018.07.011; Bunkhumpornpat C, 2009, LECT NOTES ARTIF INT, V5476, P475, DOI 10.1007/978-3-642-01307-2_43; Cai CY, 2017, 2017 IEEE INTERNATIONAL CONFERENCE ON INTELLIGENCE AND SECURITY INFORMATICS (ISI), P128, DOI 10.1109/ISI.2017.8004887; Cao Q, 2012, 9 USENIX S NETW SYST; Cassa C., 2013, PLOS CURRENTS; Charalambous C., 2016, P BRIT MACH VIS C; Chavoshi N, 2016, IEEE DATA MINING, P817, DOI [10.1109/ICDM.2016.86, 10.1109/ICDM.2016.0096]; Chawla NV, 2002, J ARTIF INTELL RES, V16, P321, DOI 10.1613/jair.953; Chawla NV, 2003, LECT NOTES ARTIF INT, V2838, P107, DOI 10.1007/978-3-540-39804-2_12; Cheng H, 2019, IEEE ACCESS, V7, P29989, DOI 10.1109/ACCESS.2019.2897799; Chu Z, 2010, 26TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE (ACSAC 2010), P21; Clark EM, 2016, J COMPUT SCI-NETH, V16, P1, DOI 10.1016/j.jocs.2015.11.002; Clement J., 2019, TWITTER NUMBER MONTH; Clement J., 2019, FACEBOOK NUMBER MONT; Conover M., 2011, NETWORKS, DOI DOI 10.1021/JA202932E; Cresci S, 2018, IEEE T DEPEND SECURE, V15, P561, DOI 10.1109/TDSC.2017.2681672; Dhar S, 2015, IEEE T CYBERNETICS, V45, P806, DOI 10.1109/TCYB.2014.2336876; Doroshenko A, 2018, 2018 IEEE SECOND INTERNATIONAL CONFERENCE ON DATA STREAM MINING & PROCESSING (DSMP), P231, DOI 10.1109/DSMP.2018.8478537; Douzas G, 2018, INFORM SCIENCES, V465, P1, DOI 10.1016/j.ins.2018.06.056; Douzas G, 2018, EXPERT SYST APPL, V91, P464, DOI 10.1016/j.eswa.2017.09.030; Faloutsos, 2013, P 22 INT C WORLD WID, P119, DOI DOI 10.1145/2488388.2488400; Ferrara E, 2016, COMMUN ACM, V59, P96, DOI 10.1145/2818717; Fiore U, 2019, INFORM SCIENCES, V479, P448, DOI 10.1016/j.ins.2017.12.030; Gilani Z., 2017, P 2017 IEEE ACM INT, P349, DOI DOI 10.1145/3110025.3110090; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gulrajani I., 2017, ADV NEURAL INFORM PR, V30, P5767; Guo HX, 2017, EXPERT SYST APPL, V73, P220, DOI 10.1016/j.eswa.2016.12.035; Han H, 2005, LECT NOTES COMPUT SC, V3644, P878, DOI 10.1007/11538059_91; HART PE, 1968, IEEE T INFORM THEORY, V14, P515, DOI 10.1109/TIT.1968.1054155; He HB, 2008, IEEE IJCNN, P1322, DOI 10.1109/IJCNN.2008.4633969; Jiang K, 2016, ARAB J SCI ENG, V41, P3255, DOI 10.1007/s13369-016-2179-2; Joshi MV, 2001, 2001 IEEE INTERNATIONAL CONFERENCE ON DATA MINING, PROCEEDINGS, P257, DOI 10.1109/ICDM.2001.989527; Khan SH, 2018, IEEE T NEUR NET LEAR, V29, P3573, DOI 10.1109/TNNLS.2017.2732482; Kubat M., 1997, ICML, V97, P179; Kudugunta S, 2018, INFORM SCIENCES, V467, P312, DOI 10.1016/j.ins.2018.08.019; Landauer TK, 1998, DISCOURSE PROCESS, V25, P259, DOI 10.1080/01638539809545028; Laurikkala J, 2001, LECT NOTES ARTIF INT, V2101, P63, DOI 10.1007/3-540-48229-6_9; Lemley J, 2017, IEEE ACCESS, V5, P5858, DOI 10.1109/ACCESS.2017.2696121; Li LX, 2018, CHAOS SOLITON FRACT, V110, P33, DOI 10.1016/j.chaos.2018.03.010; Lingam G, 2018, INT CONF IND INF SYS, P280, DOI 10.1109/ICIINFS.2018.8721318; Longadge R., 2013, INT J COMPUT SCI NET, V2, P83, DOI DOI 10.1109/SIU.2013.6531574; Lopez V, 2013, INFORM SCIENCES, V250, P113, DOI 10.1016/j.ins.2013.07.007; Loyola-Gonzalez O, 2019, IEEE ACCESS, V7, P45800, DOI 10.1109/ACCESS.2019.2904220; Ma L, 2017, BMC BIOINFORMATICS, V18, DOI 10.1186/s12859-017-1578-z; Mathew J, 2018, IEEE T NEUR NET LEAR, V29, P4065, DOI 
10.1109/TNNLS.2017.2751612; Mirza M., 2014, ARXIV14111784; Palacios A, 2014, INT J UNCERTAIN FUZZ, V22, P643, DOI 10.1142/S0218488514500330; Pedregosa F, 2011, J MACH LEARN RES, V12, P2825; Prusty MR, 2017, PROG NUCL ENERG, V100, P355, DOI 10.1016/j.pnucene.2017.07.015; Qiu C, 2017, APPL SOFT COMPUT, V53, P27, DOI 10.1016/j.asoc.2016.12.047; Shi PN, 2019, IEEE ACCESS, V7, P28855, DOI 10.1109/ACCESS.2019.2901864; Subrahmanian VS, 2016, COMPUTER, V49, P38, DOI 10.1109/MC.2016.183; Sun YM, 2009, INT J PATTERN RECOGN, V23, P687, DOI 10.1142/S0218001409007326; Sun ZB, 2015, PATTERN RECOGN, V48, P1623, DOI 10.1016/j.patcog.2014.11.014; Tallo T. E., 2018, P 4 INT C SCI TECHN P 4 INT C SCI TECHN, V1, P1; Thabtah F, 2020, INFORM SCIENCES, V513, P429, DOI 10.1016/j.ins.2019.11.004; TOMEK I, 1976, IEEE T SYST MAN CYB, V6, P769, DOI 10.1109/tsmc.1976.4309452; Van Der Walt E, 2018, IEEE ACCESS, V6, P6540, DOI 10.1109/ACCESS.2018.2796018; Varol O., 2017, P 11 INT AAAI C WEB; Wang G., 2013, P 22 USENIX C SEC SE, P241; WILSON DL, 1972, IEEE T SYST MAN CYB, VSMC2, P408, DOI 10.1109/TSMC.1972.4309137; Xu X., 2012, J SYST ENG ELECTRON, V30, P1182; Yang YQ, 2019, APPL SCI-BASEL, V9, DOI 10.3390/app9020238; Yang Z, 2011, P 2011 ACM SIGCOMM C; Zafarani Reza, 2015, P 24 ACM INT C INF K, P423, DOI DOI 10.1145/2806416.2806535; Zhang C, 2017, PROCEEDINGS OF THE 2017 INTERNATIONAL CONFERENCE ON INFORMATION TECHNOLOGY (ICIT 2017), P17, DOI [10.1145/3176653.3176676, 10.2298/PAN150402034Z]; Zhu TF, 2019, KNOWL-BASED SYST, V166, P140, DOI 10.1016/j.knosys.2018.12.021; Zhu TF, 2017, PATTERN RECOGN, V72, P327, DOI 10.1016/j.patcog.2017.07.024 77 1 1 3 7 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 2169-3536 IEEE ACCESS IEEE Access 2020 8 36664 36680 10.1109/ACCESS.2020.2975630 17 Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications Computer Science; Engineering; Telecommunications LB4OV WOS:000524616200010 gold 2021-09-15 C Zhang, JC; Inoue, N; Shinoda, K Int Speech Commun Assoc Zhang, Jiacen; Inoue, Nakamasa; Shinoda, Koichi I-vector Transformation Using Conditional Generative Adversarial Networks for Short Utterance Speaker Verification 19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES Interspeech English Proceedings Paper 19th Annual Conference of the International-Speech-Communication-Association (INTERSPEECH 2018) AUG 02-SEP 06, 2018 Hyderabad, INDIA Int Speech Commun Assoc speaker verification; short utterance; i-vector transformation; generative adversarial networks; multi-task learning I-vector based text-independent speaker verification (SV) systems often have poor performance with short utterances, as the biased phonetic distribution in a short utterance makes the extracted i-vector unreliable. This paper proposes an i-vector compensation method using a generative adversarial network (GAN), where its generator network is trained to generate a compensated i-vector from a short-utterance i-vector and its discriminator network is trained to determine whether an i-vector is generated by the generator or the one extracted from a long utterance. Additionally, we assign two other learning tasks to the GAN to stabilize its training and to make the generated i-vector more speaker-specific. 
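A minimal sketch of the generator side of such a system, assuming a 400-dimensional i-vector and a cosine regression term as one of the auxiliary tasks (both are assumptions; the abstract does not specify the dimensionality or the exact auxiliary losses):

    import torch
    import torch.nn as nn

    IVEC_DIM = 400  # assumed i-vector dimensionality

    # Generator: short-utterance i-vector -> compensated i-vector.
    gen = nn.Sequential(nn.Linear(IVEC_DIM, 512), nn.LeakyReLU(0.2),
                        nn.Linear(512, IVEC_DIM))

    short_ivec = torch.randn(32, IVEC_DIM)  # batch extracted from short utterances
    long_ivec = torch.randn(32, IVEC_DIM)   # paired long-utterance targets

    comp = gen(short_ivec)
    # One assumed instance of the auxiliary tasks: pull the compensated vector
    # towards the paired long-utterance i-vector with a cosine loss.
    aux_loss = 1.0 - nn.functional.cosine_similarity(comp, long_ivec, dim=1).mean()
    # The full generator objective would add the adversarial term supplied by
    # the discriminator, e.g. g_loss = adv_loss + lam * aux_loss.
    print(aux_loss.item())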
Speaker verification experiments on the NIST SRE 2008 "10sec-10sec" condition show that after applying our method, the equal error rate was reduced by 11.3% relative to the conventional i-vector and PLDA system. [Zhang, Jiacen; Inoue, Nakamasa; Shinoda, Koichi] Tokyo Inst Technol, Tokyo, Japan Zhang, JC (corresponding author), Tokyo Inst Technol, Tokyo, Japan. jiacen@ks.c.titech.ac.jp; inoue@ks.c.titech.ac.jp; shinoda@c.titech.ac.jp Shinoda, Koichi/D-3198-2014 Shinoda, Koichi/0000-0003-1095-3203 JSPS KAKENHIMinistry of Education, Culture, Sports, Science and Technology, Japan (MEXT)Japan Society for the Promotion of ScienceGrants-in-Aid for Scientific Research (KAKENHI) [16H02845]; JST CREST, JapanCore Research for Evolutional Science and Technology (CREST) [JPMJCR1687] This work was supported by JSPS KAKENHI 16H02845 and by JST CREST Grant Number JPMJCR1687, Japan. Arjovsky M., 2017, ARXIV170107875; Dehak N, 2011, IEEE T AUDIO SPEECH, V19, P788, DOI 10.1109/TASL.2010.2064307; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Ioffe S, 2006, LECT NOTES COMPUT SC, V3954, P531; Isola P, 2017, PROC CVPR IEEE, P5967, DOI 10.1109/CVPR.2017.632; Kanagasundaram A, 2014, SPEECH COMMUN, V59, P69, DOI 10.1016/j.specom.2014.01.004; Kanagasundaram A., 2011, INTERSPEECH 2011 12; Kenny P, 2013, INT CONF ACOUST SPEE, P7649, DOI 10.1109/ICASSP.2013.6639151; Lin WW, 2017, COMPUT SPEECH LANG, V45, P503, DOI 10.1016/j.csl.2017.02.009; Maas A. L., 2013, P ICML, V30, P3; Mahto S, 2017, INTERSPEECH, P3722, DOI 10.21437/Interspeech.2017-731; Mirza M, 2014, ARXIV14111784; Pascual S, 2017, INTERSPEECH, P3642, DOI 10.21437/Interspeech.2017-1428; Povey D., 2011, IEEE WORKSH AUT SPEE; PRINCE SJD, 2007, P ICCV, P1751; Senoussaoui M, 2010, ODYSSEY 2010: THE SPEAKER AND LANGUAGE RECOGNITION WORKSHOP, P28; Snyder D, 2016, IEEE W SP LANG TECH, P165, DOI 10.1109/SLT.2016.7846260; Tieleman T., 2012, COURSERA NEURAL NETW; Vesnicer B., 2014, P OD SPEAK LANG REC, P241; Villalba J, 2017, INTERSPEECH, P1004, DOI 10.21437/Interspeech.2017-1018; Yang IH, 2017, INT CONF ACOUST SPEE, P5490, DOI 10.1109/ICASSP.2017.7953206; 2015, P INT, P1052 22 3 3 0 2 ISCA-INT SPEECH COMMUNICATION ASSOC BAIXAS C/O EMMANUELLE FOXONET, 4 RUE DES FAUVETTES, LIEU DIT LOUS TOURILS, BAIXAS, F-66390, FRANCE 2308-457X 978-1-5108-7221-9 INTERSPEECH 2018 3613 3617 10.21437/Interspeech.2018-1680 5 Computer Science, Artificial Intelligence; Computer Science, Theory & Methods; Engineering, Electrical & Electronic Computer Science; Engineering BM5PH WOS:000465363900753 Green Submitted, Green Published 2021-09-15 J Rezaei, M; Yang, HJ; Meinel, C Rezaei, Mina; Yang, Haojin; Meinel, Christoph Recurrent generative adversarial network for learning imbalanced medical image semantic segmentation MULTIMEDIA TOOLS AND APPLICATIONS English Article Imbalanced medical image semantic segmentation; Recurrent generative adversarial network We propose a new recurrent generative adversarial architecture named RNN-GAN to mitigate the imbalanced-data problem in medical image semantic segmentation, where the number of pixels belonging to the desired object is significantly lower than the number belonging to the background. A model trained with imbalanced data tends to be biased towards the healthy data, which is not desired in clinical applications, and the outputs predicted by such networks have high precision but low recall. To mitigate the impact of imbalanced training data, we train the RNN-GAN with the proposed complementary segmentation masks in addition to the ordinary segmentation masks. 
The RNN-GAN consists of two components: a generator and a discriminator. The generator is trained on sequences of medical images to learn the corresponding segmentation label map plus the proposed complementary label, both at the pixel level, while the discriminator is trained to distinguish whether a segmentation map comes from the ground truth or from the generator network. Both the generator and the discriminator use bidirectional LSTM units to enhance temporal consistency and to obtain inter- and intra-slice representations of the features. We show evidence that the proposed framework is applicable to different types of medical images of varied sizes. In our experiments on ACDC-2017, HVSMR-2016, and LiTS-2017 benchmarks we find consistently improved results, demonstrating the efficacy of our approach. [Rezaei, Mina; Yang, Haojin; Meinel, Christoph] Hasso Plattner Inst, Prof Dr Helmert St 2-3, Potsdam, Germany Rezaei, M (corresponding author), Hasso Plattner Inst, Prof Dr Helmert St 2-3, Potsdam, Germany. mina.rezaei@hpi.de; haojin.yang@hpi.de; christoph.meinel@hpi.de Rezaei, Mina/0000-0001-6994-6345 Abadi Martin, 2015, TENSORFLOW LARGE SCA; Afshin M, 2014, IEEE T MED IMAGING, V33, P481, DOI 10.1109/TMI.2013.2287793; Ahmaddy F, 2017, AUTOMATIC LIVER TUMO; Avola D, 2011, LECT NOTES COMPUT SC, V6979, P414, DOI 10.1007/978-3-642-24088-1_43; Avola D, 2008, APPLIED COMPUTING 2008, VOLS 1-3, P1338; Bernard O, 2018, IEEE T MED IMAGING, V37, P2514, DOI 10.1109/TMI.2018.2837502; Bi L, 2017, ARXIV170402703; Chollet F., 2015, KERAS; Ciecholewski M, 2011, LECT NOTES COMPUT SC, V6636, P432, DOI 10.1007/978-3-642-21073-0_38; Douzas G, 2018, EXPERT SYST APPL, V91, P464, DOI 10.1016/j.eswa.2017.09.030; Drozdzal M, 2018, MED IMAGE ANAL, V44, P1, DOI 10.1016/j.media.2017.11.005; Eslami A, 2013, MED IMAGE ANAL, V17, P236, DOI 10.1016/j.media.2012.10.005; Fidon L, 2018, LECT NOTES COMPUT SC, V10670, P64, DOI 10.1007/978-3-319-75238-9_6; Fischl B, 2004, NEUROIMAGE, V23, pS69, DOI 10.1016/j.neuroimage.2004.07.016; Goodfellow I.J., 2014, ARXIV PREPRINT ARXIV; Graves A, 2005, NEURAL NETWORKS, V18, P602, DOI 10.1016/j.neunet.2005.06.042; Han X., 2017, AUTOMATIC LIVER LESI; Hashemi S. R., 2018, ARXIV180311078; Inda MD, 2014, CANCERS, V6, P226, DOI 10.3390/cancers6010226; Isensee F, 2018, LECT NOTES COMPUT SC, V10663, P120, DOI 10.1007/978-3-319-75541-0_13; Ishida T, 2017, ADV NEURAL INFORM PR, P5639; Isola P, 2017, PROC CVPR IEEE, P5967, DOI 10.1109/CVPR.2017.632; Jang JinWoo, 2014, The Scientific World Journal, V2014, P536723; Kaur R, 2018, MULTIMED TOOLS APPL, P1; Kohl S., 2017, ARXIV170208014; LeCun Y, 2015, NATURE, V521, P436, DOI 10.1038/nature14539; Mahapatra D, 2014, J DIGIT IMAGING, V27, P794, DOI 10.1007/s10278-014-9705-0; Moeskops P., 2017, ARXIV170703195; Nasr G.
E., 2002, FLAIRS C, P381; Osindero S., 2014, CONDITIONAL GENERATI; Pathak D, 2016, PROC CVPR IEEE, P2536, DOI 10.1109/CVPR.2016.278; Peng P, 2016, MAGN RESON MATER PHY, V29, P155, DOI 10.1007/s10334-015-0521-4; Pohl KA, 2006, NEUROIMAGE, V31, P228, DOI 10.1016/j.neuroimage.2005.11.044; Poudel RPK, 2017, LECT NOTES COMPUT SC, V10129, P83, DOI 10.1007/978-3-319-52280-7_8; Prabhu V, 2018, MULTIMED TOOLS APPL, V77, P10375, DOI 10.1007/s11042-018-5792-0; Qiu Q, 2018, PROCEEDINGS OF 2018 INTERNATIONAL CONFERENCE ON IMAGE AND GRAPHICS PROCESSING (ICIGP 2018), P78, DOI 10.1145/3191442.3191458; Rohe MM, 2018, LECT NOTES COMPUT SC, V10663, P170, DOI 10.1007/978-3-319-75541-0_18; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Rota Bulo S., 2017, P IEEE C COMP VIS PA, P2126; Shahzad R, 2017, LECT NOTES COMPUT SC, V10129, P147, DOI 10.1007/978-3-319-52280-7_15; Sudre CH, 2017, LECT NOTES COMPUT SC, V10553, P240, DOI 10.1007/978-3-319-67558-9_28; Tustison NJ, 2010, IEEE T MED IMAGING, V29, P1310, DOI 10.1109/TMI.2010.2046908; Vorontsov E, 2018, I S BIOMED IMAGING, P1332, DOI 10.1109/ISBI.2018.8363817; Wolterink J. M., 2017, P INT WORKSH STAT AT; Wolterink JM, 2016, RECONSTRUCTION SEGME, P95; Xu J, 2014, PROC CVPR IEEE, P3190, DOI 10.1109/CVPR.2014.408; Xue Y., 2017, ARXIV170601805; Yu L, 2016, RECONSTRUCTION SEGME, P103; Yu X., 2018, P EUR C COMP VIS ECC; Zhang YS, 2018, MULTIDIM SYST SIGN P, V29, P999, DOI 10.1007/s11045-017-0482-z; Zhang YB, 2020, MIN PROC EXT MET REV, V41, P75, DOI 10.1080/08827508.2018.1538986; Zhou YP, 2016, LECT NOTES COMPUT SC, V9912, P262, DOI 10.1007/978-3-319-46484-8_16; Zhu JY, 2017, IEEE I CONF COMP VIS, P2242, DOI 10.1109/ICCV.2017.244; Zhu W., 2016, ARXIV161205970; Zotti C., 2017, ARXIV170508943 55 6 7 1 9 SPRINGER DORDRECHT VAN GODEWIJCKSTRAAT 30, 3311 GZ DORDRECHT, NETHERLANDS 1380-7501 1573-7721 MULTIMED TOOLS APPL Multimed. Tools Appl. JUN 2020 79 21-22 15329 15348 10.1007/s11042-019-7305-1 20 Computer Science, Information Systems; Computer Science, Software Engineering; Computer Science, Theory & Methods; Engineering, Electrical & Electronic Computer Science; Engineering LV8HQ WOS:000538675900055 2021-09-15 J Na, BJ; Son, S Na, Byoungjoon; Son, Sangyoung Prediction of atmospheric motion vectors around typhoons using generative adversarial network JOURNAL OF WIND ENGINEERING AND INDUSTRIAL AERODYNAMICS English Article Wind velocity; Atmospheric motion vectors; Generative adversarial network; Particle image velocimetry; Satellite images TROPICAL CYCLONE TRACK; CLOUD TRACKING; TROPOSPHERIC WINDS; STORM-SURGE; FORECASTS; IMPACT; INITIATION; FIELDS; MODEL In this study, atmospheric motion vectors (AMVs) were derived from the satellite images predicted using a generative adversarial network (GAN) and a deep multi-scale frame prediction algorithm. The GAN was trained and tested with a sequence of the satellite images of a COMS satellite infrared-window channel under the 68 tropical cyclones. The inputs of the consecutive satellite images with 15-min interval were then processed using the trained GAN model to generate satellite images in the next time steps. To further enhance the model's predictability, particle image velocimetry based on the theory of cross-correlation schemes was employed to the GAN-generated satellite image sequence and AMVs were produced. The GAN-derived AMVs were validated with the wind fields based on the numerical weather prediction (NWP) and radiosonde observations. 
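The cross-correlation scheme underlying PIV can be illustrated in a few lines: estimate the shift of one interrogation window between two frames from the peak of their FFT-based cross-correlation. This is only the core idea; operational PIV processing (including the scheme used in this record) adds multi-pass window refinement and sub-pixel peak fitting:

    import numpy as np

    def window_displacement(win_a, win_b):
        # Estimate the pixel shift between two interrogation windows from the
        # peak of their FFT-based cross-correlation (no sub-pixel refinement).
        a = win_a - win_a.mean()
        b = win_b - win_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        wrap = lambda d, n: d - n if d > n // 2 else d  # undo circular wrap-around
        return wrap(dy, corr.shape[0]), wrap(dx, corr.shape[1])

    # Synthetic "cloud" patch advected by (3, 5) pixels between two frames.
    rng = np.random.default_rng(1)
    frame_a = rng.random((32, 32))
    frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))
    print(window_displacement(frame_a, frame_b))  # -> (3, 5), i.e. one AMV sample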
The comparisons showed that the GAN-derived AMVs depicted the structure of atmospheric circulations with a certain level of accuracy. Through comparison with the radiosonde observations, the root-mean-square error and the wind speed bias of the GAN-derived AMVs were comparable to, and even smaller than those of the NWP-derived wind fields. The current approach may enhance the accuracy in predicting short-term wind velocity fields, which in turn may provide more realistic inputs in storm surge modeling. [Na, Byoungjoon] Korea Univ, Future & Fus Lab Architectural Civil & Environm E, Seoul 02841, South Korea; [Son, Sangyoung] Korea Univ, Sch Civil Environm & Architectural Engn, Seoul 02841, South Korea Son, S (corresponding author), Korea Univ, Sch Civil Environm & Architectural Engn, Seoul 02841, South Korea. sson@korea.ac.kr Na, Byoungjoon/B-8280-2017; Son, Sangyoung/M-7939-2013 Na, Byoungjoon/0000-0002-8291-007X; Son, Sangyoung/0000-0002-2819-5140 National Research Foundation of KoreaNational Research Foundation of Korea [2020R1C1C100513311] This research was supported by the National Research Foundation of Korea (NRF-2019H1D3A1A01070722) and the National Research Foundation of Korea (2020R1C1C100513311) . Abidi M.A., 1988, P SPIE, V846, P54, DOI [10.1117/12.942644, DOI 10.1117/12.942644]; Alemany S., 2018, ARXIV180202548; Bedka KM, 2005, J APPL METEOROL, V44, P1761, DOI 10.1175/JAM2264.1; Berger H, 2011, J APPL METEOROL CLIM, V50, P2309, DOI 10.1175/JAMC-D-11-019.1; Borde R, 2014, J ATMOS OCEAN TECH, V31, P33, DOI 10.1175/JTECH-D-13-00126.1; Bresky WC, 2012, J APPL METEOROL CLIM, V51, P2137, DOI 10.1175/JAMC-D-11-0234.1; Cardone VJ, 2009, NAT HAZARDS, V51, P29, DOI 10.1007/s11069-009-9369-0; Cherubini T, 2006, MON WEATHER REV, V134, P2009, DOI 10.1175/MWR3163.1; Chu DD, 2019, ESTUAR COAST SHELF S, V231, DOI 10.1016/j.ecss.2019.106460; Chuang WL, 2019, J APPL METEOROL CLIM, V58, P199, DOI 10.1175/JAMC-D-18-0105.1; Endlich R., 1971, J APPL METEOROL CLIM, V10, P105, DOI 2.0.CO;2; ENDLICH RM, 1981, J APPL METEOROL, V20, P309, DOI 10.1175/1520-0450(1981)020<0309:ACTATG>2.0.CO;2; Fleming J. 
G., 2008, ESTUAR COAST MODEL, VX, P373, DOI DOI 10.1061/40990(324)48; Giffard-Roisin S., 2018, P CLIM INF WORKSH; Giffard-Roisin S., 2018, P 32 C NEURIPS; Github, 2019, ADV VID GEN; Goerss JS, 2009, MON WEATHER REV, V137, P41, DOI 10.1175/2008MWR2601.1; Harper B., 2008, WORLD METEOROLOGICAL; He JY, 2021, J WIND ENG IND AEROD, V208, DOI 10.1016/j.jweia.2020.104445; He YC, 2016, J WIND ENG IND AEROD, V152, P1, DOI 10.1016/j.jweia.2016.01.009; HOLLAND GJ, 1980, MON WEATHER REV, V108, P1212, DOI 10.1175/1520-0493(1980)108<1212:AAMOTW>2.0.CO;2; Holmlund K, 1998, WEATHER FORECAST, V13, P1093, DOI 10.1175/1520-0434(1998)013<1093:TUOSPO>2.0.CO;2; Hong S., 2017, ARXIV170803417; Houston SH, 1999, WEATHER FORECAST, V14, P671, DOI 10.1175/1520-0434(1999)014<0671:COHASS>2.0.CO;2; Hu G, 2020, J WIND ENG IND AEROD, V201, DOI 10.1016/j.jweia.2020.104138; Hwang S, 2020, NAT HAZARDS, V104, P1389, DOI 10.1007/s11069-020-04225-z; Kim Y, 2021, COAST ENG, V165, DOI 10.1016/j.coastaleng.2021.103840; Kordmahalleh MM, 2016, GECCO'16: PROCEEDINGS OF THE 2016 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE, P957, DOI 10.1145/2908812.2908834; Kossin JP, 2018, NATURE, V558, P104, DOI 10.1038/s41586-018-0158-3; Langland RH, 2009, MON WEATHER REV, V137, P1615, DOI 10.1175/2008MWR2627.1; Le Marshall J, 2017, J SO HEMISPH EARTH, V67, P12, DOI 10.22499/3.6701.002; Lee C, 2019, ESTUAR COAST SHELF S, V221, P104, DOI 10.1016/j.ecss.2019.03.021; Lee RST, 2000, IEEE T NEURAL NETWOR, V11, P680, DOI 10.1109/72.846739; Lim HJ, 2015, J GEOPHYS RES-OCEANS, V120, P2007, DOI 10.1002/2014JC010269; Lotter W., 2017, INT C LEARN REPR ICL; Mathieu M., 2015, ARXIV1511; Mecikalski JR, 2010, J APPL METEOROL CLIM, V49, P2544, DOI 10.1175/2010JAMC2480.1; Menzel WP, 2001, B AM METEOROL SOC, V82, P33, DOI 10.1175/1520-0477(2001)082<0033:CTWSIF>2.3.CO;2; Na B, 2016, J GEOPHYS RES-OCEANS, V121, P2980, DOI 10.1002/2015JC011377; Nieman SJ, 1997, B AM METEOROL SOC, V78, P1121, DOI 10.1175/1520-0477(1997)078<1121:FACDWI>2.0.CO;2; Oh SM, 2019, REMOTE SENS-BASEL, V11, DOI 10.3390/rs11172054; Powell MD, 2010, OCEAN ENG, V37, P26, DOI 10.1016/j.oceaneng.2009.08.014; Ruttgers M, 2019, SCI REP-UK, V9, DOI 10.1038/s41598-019-42339-y; Ryu Y, 2005, MEAS SCI TECHNOL, V16, P1945, DOI 10.1088/0957-0233/16/10/009; SCHMETZ J, 1993, J APPL METEOROL, V32, P1206, DOI 10.1175/1520-0450(1993)032<1206:OCMWFM>2.0.CO;2; Theunissen R, 2007, MEAS SCI TECHNOL, V18, P275, DOI 10.1088/0957-0233/18/1/034; Thielicke W., 2014, J OPEN RES STW, V2, P30, DOI [10.5334/jors.bl, DOI 10.5334/JORS.BL.VIEW, DOI 10.5334/JORS.AI]; Torres MJ, 2019, J WATERW PORT COAST, V145, DOI 10.1061/(ASCE)WW.1943-5460.0000496; VEGARIVEROS JF, 1989, IEE PROC-I, V136, P397, DOI 10.1049/ip-i-2.1989.0060; Velden C, 2005, B AM METEOROL SOC, V86, P205, DOI 10.1175/BAMS-86-2-205; Velden C, 2017, MON WEATHER REV, V145, P1107, DOI 10.1175/MWR-D-16-0229.1; Velden CS, 1997, B AM METEOROL SOC, V78, P173, DOI 10.1175/1520-0477(1997)078<0173:UTWDFG>2.0.CO;2; Weckwerth TM, 2006, MON WEATHER REV, V134, P5, DOI 10.1175/MWR3067.1; Wieneke B., 2010, 15 INT S APPL LAS TE; WOLF DE, 1977, J APPL METEOROL, V16, P1219, DOI 10.1175/1520-0450(1977)016<1219:EIACTU>2.0.CO;2; Wu TC, 2015, MON WEATHER REV, V143, P2506, DOI 10.1175/MWR-D-14-00220.1; Zhang Y., 2018, INT JOINT C NEUR NET 57 1 1 1 1 ELSEVIER AMSTERDAM RADARWEG 29, 1043 NX AMSTERDAM, NETHERLANDS 0167-6105 1872-8197 J WIND ENG IND AEROD J. Wind Eng. Ind. Aerodyn. 
JUL 2021 214 104643 10.1016/j.jweia.2021.104643 14 Engineering, Civil; Mechanics Engineering; Mechanics SV4HB WOS:000663780300001 2021-09-15 J Du, QQ; Qiang, Y; Yang, WK; Wang, YF; Ma, Y; Zia, MB Du, Qianqian; Qiang, Yan; Yang, Wenkai; Wang, Yanfei; Ma, Yong; Zia, Muhammad Bilal DRGAN: a deep residual generative adversarial network for PET image reconstruction IET IMAGE PROCESSING English Article computer vision; image reconstruction; image representation; positron emission tomography; medical image processing; image resolution; neural nets; PET image reconstruction; positron emission tomography image reconstruction; low-count projection data; physical effects; inverse problem; computer vision tasks; medical imaging; DRGAN; PET image quality; residual PET map; RPM; image representation; anatomically realistic PET images; residual dense connections; simulation data; clinical PET data; deep residual generative adversarial network; streaking artefact reduction; pixel shuffle operations DETECTORS; ALGORITHM Positron emission tomography (PET) image reconstruction from low-count projection data and physical effects is challenging because the inverse problem is ill-posed and the resultant image is usually noisy. Recently, generative adversarial networks (GANs) have also shown their superior performance in many computer vision tasks and attracted growing interests in medical imaging. In this work, the authors proposed a novel model [deep residual generative adversarial network (DRGAN)] based on GANs for the reduction of streaking artefacts and the improvement of PET image quality. An innovative feature of the proposed method is that the authors trained a generator to produce 'residual PET map' (RPM) for image representation, rather than generate PET images directly. DRGAN used two discriminators (critics) to enforce anatomically realistic PET images and RPM. To better boost the contextual information, the authors designed residual dense connections followed with pixel shuffle operations (RDPS blocks) that encourage feature reuse and prevent losing resolution. Both simulation data and real clinical PET data are used to evaluate the proposed method. Compared with other state-of-the-art methods, the quantification results show that DRGAN can achieve better performance in bias-variance trade-off and provide comparable image quality. Their results were rigorously evaluated by one radiologist at the Shanxi Cancer Hospital. [Du, Qianqian; Qiang, Yan; Yang, Wenkai; Wang, Yanfei; Zia, Muhammad Bilal] Taiyuan Univ Technol, Coll Informat & Comp, Taiyuan 030024, Peoples R China; [Ma, Yong] Shanxi Canc Hosp, Dept Thorac Surg, Taiyuan 030024, Peoples R China Qiang, Y (corresponding author), Taiyuan Univ Technol, Coll Informat & Comp, Taiyuan 030024, Peoples R China. qiangyan@tyut.edu.cn National Natural Science Foundation of China (NSFC)National Natural Science Foundation of China (NSFC) [61872261, 201801D121139]; Provincial Department of Science and Technology (Shanxi, China)Department of Science & Technology (DOST), Philippines This work was supported by the National Natural Science Foundation of China (NSFC) under grant number 61872261 and the basic research (201801D121139, Development of Novel Artificial Intelligence Technologies to Assist Imaging Diagnosis of Pulmonary) funded by the Provincial Department of Science and Technology (Shanxi, China). 
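The residual-map idea in the DRGAN record above can be made concrete in a few lines of PyTorch. A minimal sketch, assuming single-channel PET slices; the real model's residual dense blocks, pixel-shuffle (RDPS) stages, and twin critics are omitted:

```python
import torch
import torch.nn as nn

class ResidualMapGenerator(nn.Module):
    """Toy generator illustrating the residual-map idea: the network
    predicts a correction (the 'residual PET map') that is added back
    to the low-count input, instead of synthesising the image directly."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        rpm = self.body(x)   # residual PET map
        return x + rpm       # reconstructed PET image

g = ResidualMapGenerator()
out = g(torch.randn(1, 1, 64, 64))   # same spatial size as the input
```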
Aharon M, 2006, IEEE T SIGNAL PROCES, V54, P4311, DOI 10.1109/TSP.2006.881199; Arjovsky M., 2017, ARXIV170107875, P214; Arjovsky M., 2017, ARXIV170104862; Bagci U, 2013, LECT NOTES COMPUT SC, V8151, P115, DOI 10.1007/978-3-642-40760-4_15; Branderhorst W, 2010, PHYS MED BIOL, V55, P2023, DOI 10.1088/0031-9155/55/7/015; Brock A, 2016, PROCEEDINGS OF THE ASME INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE, 2016, VOL 1B; Buades A, 2005, PROC CVPR IEEE, P60, DOI 10.1109/cvpr.2005.38; Chen H, 2017, IEEE T MED IMAGING, V36, P2524, DOI 10.1109/TMI.2017.2715284; Chen H, 2017, BIOMED OPT EXPRESS, V8, P679, DOI 10.1364/BOE.8.000679; Chintala S., 2015, UNSUPERVISED REPRESE, DOI DOI 10.1051/0004-6361/201527329; Dabov K, 2006, PROC SPIE, V6064, DOI 10.1117/12.643267; Dong H, 2017, PROCEEDINGS OF THE 2017 ACM MULTIMEDIA CONFERENCE (MM'17), P1201, DOI 10.1145/3123266.3129391; DUTTA J, 2013, PLOS ONE, V0008; Gong K, 2019, IEEE T RADIAT PLASMA, V3, P153, DOI 10.1109/TRPMS.2018.2877644; Gong K, 2019, IEEE T MED IMAGING, V38, P675, DOI 10.1109/TMI.2018.2869871; Gong K, 2016, PHYS MED BIOL, V61, P3681, DOI 10.1088/0031-9155/61/10/3681; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; He K., 2016, PROC CVPR IEEE, P770, DOI DOI 10.1109/CVPR.2016.90; He KM, 2015, IEEE I CONF COMP VIS, P1026, DOI 10.1109/ICCV.2015.123; Huang G, 2017, PROC CVPR IEEE, P2261, DOI 10.1109/CVPR.2017.243; Huang X, 2017, PROC CVPR IEEE, P1866, DOI 10.1109/CVPR.2017.202; Humm JL, 2003, EUR J NUCL MED MOL I, V30, P1574, DOI 10.1007/s00259-003-1266-2; Hunt B.R., 1981, J SIAM REV, V23, P142; Isola P, 2017, ARXIV161107004V, DOI DOI 10.1109/CVPR.2017.632; Jiao J., 2017, ARXIV170407244; Johnson J, 2016, LECT NOTES COMPUT SC, V9906, P694, DOI 10.1007/978-3-319-46475-6_43; Karp JS, 2008, J NUCL MED, V49, P462, DOI 10.2967/jnumed.107.044834; Kim K, 2018, IEEE T MED IMAGING, V37, P1478, DOI 10.1109/TMI.2018.2832613; Kingma D. 
P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Kodali N., 2017, ARXIV170507215; LANGE K, 1984, J COMPUT ASSIST TOMO, V8, P306; Le Pogam A, 2013, MED IMAGE ANAL, V17, P877, DOI 10.1016/j.media.2013.05.005; Ledig C, 2017, PROC CVPR IEEE, P105, DOI 10.1109/CVPR.2017.19; Nie Dong, 2017, Med Image Comput Comput Assist Interv, V10435, P417, DOI 10.1007/978-3-319-66179-7_48; Noh H, 2015, IEEE I CONF COMP VIS, P1520, DOI 10.1109/ICCV.2015.178; Panin VY, 2011, IEEE NUCL SCI CONF R, P2986, DOI 10.1109/NSSMIC.2011.6152534; Poon JK, 2012, PHYS MED BIOL, V57, P4077, DOI 10.1088/0031-9155/57/13/4077; Portilla J, 2003, IEEE T IMAGE PROCESS, V12, P1338, DOI 10.1109/TIP.2003.818640; Qiao JJ, 2019, IET IMAGE PROCESS, V13, P2673, DOI 10.1049/iet-ipr.2018.6570; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Royzman EB, 2015, JUDGM DECIS MAK, V10, P296; Russakovsky O, 2015, INT J COMPUT VISION, V115, P211, DOI 10.1007/s11263-015-0816-y; Shi WZ, 2016, PROC CVPR IEEE, P1874, DOI 10.1109/CVPR.2016.207; Simard PY, 2003, PROC INT CONF DOC, P958; Simonyan K., 2014, ARXIV PREPRINT; Socher R., 2013, ADV NEURAL INFORM PR, DOI DOI 10.1007/978-3-319-46478-7; Wang CY, 2014, IEEE ENG MED BIO, P1917, DOI 10.1109/EMBC.2014.6943986; Wang G, 2017, MED PHYS, V44, P2041, DOI 10.1002/mp.12204; Wang G, 2016, IEEE ACCESS, V4, P8914, DOI 10.1109/ACCESS.2016.2624938; Wang GB, 2012, IEEE T MED IMAGING, V31, P2194, DOI 10.1109/TMI.2012.2211378; Wang JY, 2019, J ENG-JOE, V2019, P8093, DOI 10.1049/joe.2019.0696; Wang Y, 2018, LECT NOTES COMPUT SC, V11070, P329, DOI 10.1007/978-3-030-00928-1_38; Wolterink JM, 2017, IEEE T MED IMAGING, V36, P2536, DOI 10.1109/TMI.2017.2708987; Yang G, 2018, IEEE T MED IMAGING, V37, P1310, DOI 10.1109/TMI.2017.2785879; Yang Q, 2017, ARXIV170207019; Yang YF, 2009, PHYS MED BIOL, V54, P433, DOI 10.1088/0031-9155/54/2/017; Yu F., 2016, ICLR, DOI DOI 10.16373/J.CNKI.AHR.150049.1511.07122; Yu S., 2016, INT C BIOM ENG IBIOM, P1, DOI DOI 10.1109/IBIOMED.2016.7869821; Zhang K, 2017, IEEE T IMAGE PROCESS, V26, P3142, DOI 10.1109/TIP.2017.2662206; Zhu JY, 2016, LECT NOTES COMPUT SC, V9909, P597, DOI 10.1007/978-3-319-46454-1_36 60 2 2 2 12 WILEY HOBOKEN 111 RIVER ST, HOBOKEN 07030-5774, NJ USA 1751-9659 1751-9667 IET IMAGE PROCESS IET Image Process. JUL 20 2020 14 9 1690 1700 10.1049/iet-ipr.2019.1107 11 Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic; Imaging Science & Photographic Technology Computer Science; Engineering; Imaging Science & Photographic Technology MU9EP WOS:000555968900002 Bronze 2021-09-15 C Bao, SY; Wang, ZW; Liu, TY; Chen, DQ; Cai, YM; Huang, R Claeys, C; Liang, S; Lin, Q; Huang, R; Wu, H; Song, P; Lai, K; Zhang, Y; Zang, B; Qu, X; Lung, HL; Yu, W Bao, Shengyu; Wang, Zongwei; Liu, Tianyi; Chen, Daqin; Cai, Yimao; Huang, Ru IMPACT OF CIRCUIT LIMIT AND DEVICE NOISE ON RRAM BASED CONDITIONAL GENERATIVE ADVERSARIAL NETWORK 2020 CHINA SEMICONDUCTOR TECHNOLOGY INTERNATIONAL CONFERENCE 2020 (CSTIC 2020) English Proceedings Paper China Semiconductor Technology International Conference (CSTIC) JUN 29-JUL 17, 2020 ELECTR NETWORK Semiconductor Equipment & Mat Int, IMEC, Integrated Circuit Mat Ind Technol Innovat Alliance, IEEE Electron Devices Soc CGAN; RRAM; Read Noise In this work, a Conditional Generative Adversarial Network (CGAN) [1] is demonstrated based on the Resistive Random Access Memory (RRAM). 
During training, the read noise of RRAM is utilized as a random bias source to enrich the diversity of the generator in CGAN. Further, we evaluate the impact of both read noise (RRAM as weight storage cell) and the resolution of the AD/DA circuit on the performance of CGAN through a comprehensive simulation. [Bao, Shengyu; Wang, Zongwei; Liu, Tianyi; Chen, Daqin; Cai, Yimao; Huang, Ru] Peking Univ, Inst Microelect, Beijing 100871, Peoples R China; [Wang, Zongwei; Huang, Ru] Peking Univ, Key Lab Microelect Devices & Circuits, Beijing 100871, Peoples R China; [Cai, Yimao] Peking Univ, Frontiers Sci Ctr Nanooptoelect, Beijing 100871, Peoples R China Wang, ZW; Cai, YM (corresponding author), Peking Univ, Inst Microelect, Beijing 100871, Peoples R China.; Wang, ZW (corresponding author), Peking Univ, Key Lab Microelect Devices & Circuits, Beijing 100871, Peoples R China.; Cai, YM (corresponding author), Peking Univ, Frontiers Sci Ctr Nanooptoelect, Beijing 100871, Peoples R China. wangzongwei@pku.edu.cn; caiyimao@pku.edu.cn National Key Research and Development Project [2018YFB1107701]; National Natural Science Foundation of ChinaNational Natural Science Foundation of China (NSFC) [61834001, 61904003, 61421005]; "111" ProjectMinistry of Education, China - 111 Project [B18001]; China Postdoctoral Science FoundationChina Postdoctoral Science Foundation [2019M650340] This work was supported in part by the National Key Research and Development Project under grant No. 2018YFB1107701, in part by the National Natural Science Foundation of China under grant No. 61834001, No. 61904003, No. 61421005, and in part by the "111" Project under grant No. B18001. Z. W. acknowledges the support from China Postdoctoral Science Foundation (No. 2019M650340). Kang J, 2017, INT EL DEVICES MEET; Mirza M., 2014, ARXIV14111784; Salakhutdinov R. R, 2017, P ADV NEUR INF PROC, P6510; Wang ZW, 2016, NANOSCALE, V8, P14015, DOI 10.1039/c6nr00476h 4 0 0 0 0 IEEE NEW YORK 345 E 47TH ST, NEW YORK, NY 10017 USA 978-1-7281-6558-5 2020 3 Computer Science, Hardware & Architecture; Engineering, Manufacturing; Engineering, Electrical & Electronic; Nanoscience & Nanotechnology; Materials Science, Multidisciplinary Computer Science; Engineering; Science & Technology - Other Topics; Materials Science BS0GR WOS:000682768500160 2021-09-15 J Prykhodko, O; Johansson, SV; Kotsias, PC; Arus-Pous, J; Bjerrum, EJ; Engkvist, O; Chen, HM Prykhodko, Oleksii; Johansson, Simon Viet; Kotsias, Panagiotis-Christos; Arus-Pous, Josep; Bjerrum, Esben Jannik; Engkvist, Ola; Chen, Hongming A de novo molecular generation method using latent vector based generative adversarial network JOURNAL OF CHEMINFORMATICS English Article Molecular design; Autoencoder networks; Generative adversarial networks; Deep learning DRUG DISCOVERY; DATABASE; DESIGN Deep learning methods applied to drug discovery have been used to generate novel structures. In this study, we propose a new deep learning architecture, LatentGAN, which combines an autoencoder and a generative adversarial neural network for de novo molecular design. We applied the method in two scenarios: one to generate random drug-like compounds and another to generate target-biased compounds. Our results show that the method works well in both cases. Sampled compounds from the trained model can largely occupy the same chemical space as the training set and also generate a substantial fraction of novel compounds. 
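A minimal sketch of the LatentGAN arrangement described here, assuming the GAN acts on fixed-length autoencoder latent vectors and a pretrained decoder (not shown) maps generated latents back to SMILES strings; all sizes are hypothetical:

```python
import torch
import torch.nn as nn

noise_dim, latent_dim = 128, 512   # hypothetical sizes

# The GAN operates on autoencoder latent vectors rather than on
# molecules directly.
G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                  nn.Linear(256, latent_dim))
D = nn.Sequential(nn.Linear(latent_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

fake_latents = G(torch.randn(64, noise_dim))   # 64 candidate latent vectors
scores = D(fake_latents)                       # critic scores
# decoder(fake_latents) -> SMILES, via the pretrained autoencoder decoder
```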
Moreover, the drug-likeness score of compounds sampled from LatentGAN is also similar to that of the training set. Lastly, generated compounds differ from those obtained with a Recurrent Neural Network-based generative model approach, indicating that both methods can be used complementarily. [Prykhodko, Oleksii; Johansson, Simon Viet; Kotsias, Panagiotis-Christos; Arus-Pous, Josep; Bjerrum, Esben Jannik; Engkvist, Ola; Chen, Hongming] AstraZeneca, Biopharmaceut R&D, Discovery Sci, Hit Discovery, Gothenburg, Sweden; [Arus-Pous, Josep] Univ Bern, Dept Chem & Biochem, Bern, Switzerland; [Prykhodko, Oleksii; Johansson, Simon Viet] Chalmers Univ Technol, Dept Comp Sci & Engn, Gothenburg, Sweden; [Chen, Hongming] Chem & Chem Biol Ctr, Guangzhou Regenerat Med & Hlth Guangdong Lab, Sci Pk, Guangzhou, Peoples R China Johansson, SV; Chen, HM (corresponding author), AstraZeneca, Biopharmaceut R&D, Discovery Sci, Hit Discovery, Gothenburg, Sweden.; Johansson, SV (corresponding author), Chalmers Univ Technol, Dept Comp Sci & Engn, Gothenburg, Sweden.; Chen, HM (corresponding author), Chem & Chem Biol Ctr, Guangzhou Regenerat Med & Hlth Guangdong Lab, Sci Pk, Guangzhou, Peoples R China. Simon.johansson@astrazeneca.com; chen71@hotmail.com ; Bjerrum, Esben Jannik/M-5600-2014 Johansson, Simon Viet/0000-0001-9139-6378; Engkvist, Ola/0000-0003-4970-6461; Bjerrum, Esben Jannik/0000-0003-1614-7376; Arus-Pous, Josep/0000-0002-9860-2944 European UnionEuropean Commission [676434] Josep Arus-Pous is supported financially by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie Grant Agreement No. 676434, "Big Data in Chemistry" ("BIGCHEM," http://bigch em.eu). Arus-Pous J, 2019, J CHEMINFORMATICS, V11, DOI 10.1186/s13321-019-0393-0; Arus-Pous J, 2019, J CHEMINFORMATICS, V11, DOI 10.1186/s13321-019-0341-z; Bemis GW, 1996, J MED CHEM, V39, P2887, DOI 10.1021/jm9602928; Bickerton GR, 2012, NAT CHEM, V4, P90, DOI [10.1038/nchem.1243, 10.1038/NCHEM.1243]; Bjerrum EJ, 2018, BIOMOLECULES, V8, DOI 10.3390/biom8040131; Blaschke T, 2018, MOL INFORM, V37, DOI 10.1002/minf.201700123; Chen HM, 2018, DRUG DISCOV TODAY, V23, P1241, DOI 10.1016/j.drudis.2018.01.039; Chen HM, 2018, MOL INFORM, V37, DOI 10.1002/minf.201800041; De Cao N., 2018, MOLGAN IMPLICIT GENE; Ekins S, 2016, PHARM RES-DORDR, V33, P2594, DOI 10.1007/s11095-016-2029-7; Ertl P, 2009, J CHEMINFORMATICS, V1, DOI 10.1186/1758-2946-1-8; Gaulton A, 2017, NUCLEIC ACIDS RES, V45, pD945, DOI 10.1093/nar/gkw1074; Gawehn E, 2016, MOL INFORM, V35, P3, DOI 10.1002/minf.201501008; Gomez-Bombarelli R, 2018, ACS CENTRAL SCI, V4, P268, DOI 10.1021/acscentsci.7b00572; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Graves A, 2016, NATURE, V538, P471, DOI 10.1038/nature20101; Hessler G, 2018, MOLECULES, V23, DOI 10.3390/molecules23102520; Irwin JJ, 2005, J CHEM INF MODEL, V45, P177, DOI 10.1021/ci049714+; Kadurin A, 2017, ONCOTARGET, V8, P10883, DOI 10.18632/oncotarget.14073; Karras T, 2017, ARXIV171010196; Kotsias P.-C., 2019, DIRECT STEERING NOVO; KULLBACK S, 1951, ANN MATH STAT, V22, P79, DOI 10.1214/aoms/1177729694; Landrum G., 2014, RDKIT OPEN SOURCE CH; Li Y, 2018, LEARNING DEEP GENERA, P1; Lim J, 2018, J CHEMINFORMATICS, V10, DOI 10.1186/s13321-018-0286-7; Lipinski CA, 1997, ADV DRUG DELIVER REV, V23, P3, DOI 10.1016/S0169-409X(96)00423-1; Lo YC, 2018, DRUG DISCOV TODAY, V23, P1538, DOI 10.1016/j.drudis.2018.05.010; Luo Yun, 2018, Annu Int Conf IEEE Eng Med Biol Soc, V2018, P2535, DOI 10.1109/EMBC.2018.8512865; Luo ZR, 2018, ADV 
MECH ENG, V10, P1, DOI 10.1177/1687814018785286; Nguyen KT, 2009, CHEMMEDCHEM, V4, P1803, DOI 10.1002/cmdc.200900317; Olivecrona M, 2017, J CHEMINFORMATICS, V9, DOI 10.1186/s13321-017-0235-x; Pedregosa F, 2011, J MACH LEARN RES, V12, P2825; Polykovskiy D, MOSES GITHUB REPOSIT; Polykovskiy D, 2018, MOL PHARMACEUT, V15, P4398, DOI 10.1021/acs.molpharmaceut.8b00839; Preuer K, 2018, J CHEM INF MODEL, V58, P1736, DOI 10.1021/acs.jcim.8b00234; Putin E, 2018, J CHEM INF MODEL, V58, P1194, DOI 10.1021/acs.jcim.7b00690; Putin E, 2018, MOL PHARMACEUT, V15, P4386, DOI 10.1021/acs.molpharmaceut.7b01137; Schneider G, 2011, FUTURE MED CHEM, V3, P415, DOI [10.4155/FMC.11.8, 10.4155/fmc.11.8]; Schneider P, 2016, J MED CHEM, V59, P4077, DOI 10.1021/acs.jmedchem.5b01849; Segler MHS, 2018, ACS CENTRAL SCI, V4, P120, DOI 10.1021/acscentsci.7b00512; Sun JM, 2017, J CHEMINFORMATICS, V9, DOI 10.1186/s13321-017-0222-2; Voss C, 2015, MODELING MOL RECURRE; WEININGER D, 1988, J CHEM INF COMP SCI, V28, P31, DOI 10.1021/ci00057a005; Williams RJ, 1989, NEURAL COMPUT, V1, P270, DOI 10.1162/neco.1989.1.2.270; You J., 2018, GRAPH CONVOLUTIONAL 45 34 34 3 9 BMC LONDON CAMPUS, 4 CRINAN ST, LONDON N1 9XW, ENGLAND 1758-2946 J CHEMINFORMATICS J. Cheminformatics DEC 6 2019 11 1 74 10.1186/s13321-019-0397-9 13 Chemistry, Multidisciplinary; Computer Science, Information Systems; Computer Science, Interdisciplinary Applications Chemistry; Computer Science KP9SI WOS:000516570500001 33430938 Green Published, gold 2021-09-15 J Han, X; Xue, L; Shao, FC; Xu, Y Han, Xu; Xue, Lei; Shao, Fucai; Xu, Ying A Power Spectrum Maps Estimation Algorithm Based on Generative Adversarial Networks for Underlay Cognitive Radio Networks SENSORS English Article underlay cognitive radio networks; power spectrum maps estimation; deep learning; generative adversarial networks; image reconstruction In underlay cognitive radio networks, the main challenge in detecting idle radio resources is estimating the power spectrum maps (PSMs), as the radio propagation characteristics are hard to obtain. For this reason, we propose a novel PSM estimation algorithm based on generative adversarial networks (GANs). First, we constructed the PSM estimation model as a regression model in deep learning. Then, we converted the estimation task into an image reconstruction task by image color mapping. We fulfilled the above task by designing an image generator and an image discriminator in the proposed maps' estimation GANs (MEGANs). The generator is trained to extract the radio propagation characteristics and generate the PSM images, while the discriminator is trained to identify the generated images and thereby help improve the generator's performance. Through the training process of MEGANs, the abilities of the generator and the discriminator are enhanced continually until a balance is reached, at which point high-accuracy PSM estimation is achieved. The proposed MEGANs algorithm learns and utilizes accurate radio propagation features from the training process rather than making direct imprecise or biased propagation assumptions as in the traditional methods. Simulation results demonstrate that the MEGANs algorithm provides more accurate estimation performance than the conventional methods.
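The adversarial regression just described can be illustrated with a pix2pix-style generator objective. A minimal sketch, assuming PyTorch modules G and D and an L1 reconstruction term with weight lam = 100 (both are common conditional-GAN assumptions, not the published MEGANs loss):

```python
import torch
import torch.nn as nn

adv_loss = nn.BCEWithLogitsLoss()
rec_loss = nn.L1Loss()

def generator_step(G, D, observations, reference_psm, lam=100.0):
    """One generator update for an image-to-image GAN: the generated
    PSM image should fool the discriminator while staying close to the
    reference map."""
    fake_psm = G(observations)
    logits = D(fake_psm)
    loss = adv_loss(logits, torch.ones_like(logits)) \
         + lam * rec_loss(fake_psm, reference_psm)
    return loss
```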
[Han, Xu; Xue, Lei; Xu, Ying] Natl Univ Def Technol, Elect Countermeasure Coll, Hefei 230037, Peoples R China; [Shao, Fucai] Beijing Mil Representat Off, Beijing 100191, Peoples R China Xu, Y (corresponding author), Natl Univ Def Technol, Elect Countermeasure Coll, Hefei 230037, Peoples R China. hanxu17@nudt.edu.cn; lei_xue1020@163.com; fucai_shao@126.com; xu_ying1020@126.com Ahmad A, 2015, IEEE COMMUN SURV TUT, V17, P888, DOI 10.1109/COMST.2015.2401597; Alaya-Feki A., 2008, IEEE 19 INT S PERS I, P1; Bazerque JA, 2011, IEEE T SIGNAL PROCES, V59, P4648, DOI 10.1109/TSP.2011.2160858; Bazerque JA, 2011, INT CONF ACOUST SPEE, P2992; Bazerque JA, 2010, IEEE T SIGNAL PROCES, V58, P1847, DOI 10.1109/TSP.2009.2038417; Bi J., 2018, P 2018 INT C IND POS, P1; Ding GR, 2016, IEEE J SEL AREA COMM, V34, P107, DOI 10.1109/JSAC.2015.2452532; El Tanab M, 2017, IEEE COMMUN SURV TUT, V19, P1249, DOI 10.1109/COMST.2016.2631079; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gulrajani I., 2017, ADV NEURAL INFORM PR, V30, P5767; Han B, 2019, SENSORS-BASEL, V19, DOI 10.3390/s19204493; HIROSE Y, 1991, NEURAL NETWORKS, V4, P61, DOI 10.1016/0893-6080(91)90032-Z; Jan SS, 2015, SENSORS-BASEL, V15, P21377, DOI 10.3390/s150921377; Jin KH, 2017, IEEE T IMAGE PROCESS, V26, P4509, DOI 10.1109/TIP.2017.2713099; LeCun Y, 2015, NATURE, V521, P436, DOI 10.1038/nature14539; Liu WB, 2017, NEUROCOMPUTING, V234, P11, DOI 10.1016/j.neucom.2016.12.038; Lu ZL, 2011, IEICE T FUND ELECTR, VE94A, P1608, DOI 10.1587/transfun.E94.A.1608; Mi Y, 2019, SENSORS-BASEL, V19, DOI 10.3390/s19112522; Pathak D, 2016, PROC CVPR IEEE, P2536, DOI 10.1109/CVPR.2016.278; Romero D, 2017, IEEE T SIGNAL PROCES, V65, P2547, DOI 10.1109/TSP.2017.2666775; Talvitie J, 2015, IEEE T VEH TECHNOL, V64, P1340, DOI 10.1109/TVT.2015.2397598; Tang MY, 2016, IEEE ACCESS, V4, P8044, DOI 10.1109/ACCESS.2016.2627243; Xie HX, 2016, IEEE J SEL AREA COMM, V34, P2537, DOI 10.1109/JSAC.2016.2605238; Zhao Q, 2007, IEEE SIGNAL PROC MAG, V24, P79, DOI 10.1109/MSP.2007.361604; Zhou XW, 2018, CHINA COMMUN, V15, P16; Zhou Yu, 2018, Journal of Zhejiang University (Engineering Science), V52, P1088, DOI 10.3785/j.issn.1008-973X.2018.06.007; Zhu H., 2016, SIGNAL PROCESSING IE, V59, P2002, DOI DOI 10.1109/TSP.2011.2109956 27 8 8 0 1 MDPI BASEL ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND 1424-8220 SENSORS-BASEL Sensors JAN 2020 20 1 311 10.3390/s20010311 19 Chemistry, Analytical; Engineering, Electrical & Electronic; Instruments & Instrumentation Chemistry; Engineering; Instruments & Instrumentation KH2QT WOS:000510493100311 31935903 Green Published, gold 2021-09-15 J Fiore, U; De Santis, A; Perla, F; Zanetti, P; Palmieri, F Fiore, Ugo; De Santis, Alfredo; Perla, Francesca; Zanetti, Paolo; Palmieri, Francesco Using generative adversarial networks for improving classification effectiveness in credit card fraud detection INFORMATION SCIENCES English Article Fraud detection; Supervised classification; Deep learning; Generative adversarial networks SUPPORT VECTOR MACHINES; SMOTE In the last years, the number of frauds in credit card-based online payments has grown dramatically, pushing banks and e-commerce organizations to implement automatic fraud detection systems, performing data mining on huge transaction logs. Machine learning seems to be one of the most promising solutions for spotting illicit transactions, by distinguishing fraudulent and non-fraudulent instances through the use of supervised binary classification systems properly trained from pre-screened sample datasets. 
However, in such a specific application domain, datasets available for training are strongly imbalanced, with the class of interest considerably less represented than the other. This significantly reduces the effectiveness of binary classifiers, undesirably biasing the results toward the prevailing class, while we are interested in the minority class. Oversampling the minority class has been adopted to alleviate this problem, but this method still has some drawbacks. Generative Adversarial Networks are general, flexible, and powerful generative deep learning models that have achieved success in producing convincingly real-looking images. We trained a GAN to output mimicked minority class examples, which were then merged with training data into an augmented training set so that the effectiveness of a classifier can be improved. Experiments show that a classifier trained on the augmented set outperforms the same classifier trained on the original data, especially as far as the sensitivity is concerned, resulting in an effective fraud detection mechanism. (C) 2017 Elsevier Inc. All rights reserved. [Fiore, Ugo; Perla, Francesca; Zanetti, Paolo] Parthenope Univ, Dept Management Studies Quantitat Methods, Naples, Italy; [De Santis, Alfredo; Palmieri, Francesco] Univ Salerno, Dept Informat, Fisciano, Italy Fiore, U (corresponding author), Parthenope Univ, Dept Management Studies Quantitat Methods, Naples, Italy. ufiore@unina.it Fiore, Ugo/D-4174-2009 Fiore, Ugo/0000-0003-0509-5662; Perla, Francesca/0000-0002-4671-3917; ZANETTI, Paolo/0000-0002-5915-2389 Akbani R, 2004, LECT NOTES COMPUT SC, V3201, P39, DOI 10.1007/978-3-540-30115-8_7; Arjovsky M., 2017, P 5 INT C LEARN REPR; Becker BG, 1997, IEEE COMPUT GRAPH, V17, P75, DOI 10.1109/38.595278; Bengio Y, 2013, INT CONF ACOUST SPEE, P8624, DOI 10.1109/ICASSP.2013.6639349; Bengio Y, 2013, IEEE T PATTERN ANAL, V35, P1798, DOI 10.1109/TPAMI.2013.50; Bhattacharyya S, 2011, DECIS SUPPORT SYST, V50, P602, DOI 10.1016/j.dss.2010.08.008; Brachman RJ, 1996, COMMUN ACM, V39, P42, DOI 10.1145/240455.240468; Bunkhumpornpat C, 2012, APPL INTELL, V36, P664, DOI 10.1007/s10489-011-0287-y; Chawla NV, 2002, J ARTIF INTELL RES, V16, P321, DOI 10.1613/jair.953; Dal Pozzolo A, 2015, 2015 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), P159, DOI 10.1109/SSCI.2015.33; Davenport MA, 2007, 2007 IEEE/SP 14TH WORKSHOP ON STATISTICAL SIGNAL PROCESSING, VOLS 1 AND 2, P630, DOI 10.1109/SSP.2007.4301335; Davenport MA, 2010, IEEE T PATTERN ANAL, V32, P1888, DOI 10.1109/TPAMI.2010.29; Elkan C., 2001, P JOINT C ART INT, V17, P973, DOI DOI 10.5555/1642194.1642224; Galar M, 2012, IEEE T SYST MAN CY C, V42, P463, DOI 10.1109/TSMCC.2011.2161285; Ghosh S., 1994, Proceedings of the Twenty-Seventh Hawaii International Conference on System Sciences. Vol.III: Information Systems: Decision Support and Knowledge-Based Systems (Cat.
No.94TH0607-2), P621, DOI 10.1109/HICSS.1994.323314; Glorot X., P 14 INT C ART INT S, V15, P315; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Goodfellow Ian J., 2013, ARXIV PREPRINT ARXIV; Han H, 2005, LECT NOTES COMPUT SC, V3644, P878, DOI 10.1007/11538059_91; He HB, 2009, IEEE T KNOWL DATA EN, V21, P1263, DOI 10.1109/TKDE.2008.239; Hinton Geoffrey E, 2012, NEURAL NETWORKS TRIC, P599, DOI [DOI 10.1007/978-3-642-35289-8_, DOI 10.1007/978-3-642-35289-8_32]; IVAKHNENKO AG, 1971, IEEE T SYST MAN CYB, VSMC1, P364, DOI 10.1109/TSMC.1971.4308320; Japkowicz N., 2002, Intelligent Data Analysis, V6, P429; Jensen D., 1997, AAAI WORKSH AI APPR, P34; Kim HC, 2003, PATTERN RECOGN, V36, P2757, DOI 10.1016/S0031-3203(03)00175-4; Kuncheva L.I., 2014, COMBINING PATTERN CL, V2nd; LeCun Y, 2015, NATURE, V521, P436, DOI 10.1038/nature14539; Lopez V, 2013, INFORM SCIENCES, V250, P113, DOI 10.1016/j.ins.2013.07.007; Manderick B, 2002, P 1 INT NAIS C NEUR, P261; Masnadi-Shirazi H, 2010, P 27 INT C MACH LEAR, P759; Rosset S., 1999, P 5 ACM SIGKDD INT C, P409, DOI [10.1145/312129.312303, DOI 10.1145/312129.312303]; Scott C, 2005, IEEE T INFORM THEORY, V51, P3806, DOI 10.1109/TIT.2005.856955; Shao H, 2002, 2002 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-4, PROCEEDINGS, P1241, DOI 10.1109/ICMLC.2002.1167400; Sherman E., 2002, NEWSWEEK, V139, p32B; Syeda M, 2002, PROCEEDINGS OF THE 2002 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOL 1 & 2, P572, DOI 10.1109/FUZZ.2002.1005055; Viaene S, 2004, IEEE T KNOWL DATA EN, V16, P612, DOI 10.1109/TKDE.2004.1277822 36 69 76 12 110 ELSEVIER SCIENCE INC NEW YORK STE 800, 230 PARK AVE, NEW YORK, NY 10169 USA 0020-0255 1872-6291 INFORM SCIENCES Inf. Sci. APR 2019 479 448 455 10.1016/j.ins.2017.12.030 8 Computer Science, Information Systems Computer Science HK8HE WOS:000458229200028 2021-09-15 J Dai, XJ; Lei, Y; Liu, YZ; Wang, TH; Ren, L; Curran, WJ; Patel, P; Liu, T; Yang, XF Dai, Xianjin; Lei, Yang; Liu, Yingzi; Wang, Tonghe; Ren, Lei; Curran, Walter J.; Patel, Pretesh; Liu, Tian; Yang, Xiaofeng Intensity non-uniformity correction in MR imaging using residual cycle generative adversarial network PHYSICS IN MEDICINE AND BIOLOGY English Article magnetic resonance imaging (MRI); bias field; intensity non-uniformity; deep learning; generative adversarial network (GAN) BIAS FIELD ESTIMATION; RETROSPECTIVE CORRECTION; FAT-SUPPRESSION; ABDOMINAL MRI; INHOMOGENEITY; SEGMENTATION; DENSITY; TUMORS; N3 Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative magnetic resonance (MR) image analysis in daily clinical practice. Although having no severe impact on visual diagnosis, the INU can highly degrade the performance of automatic quantitative analysis such as segmentation, registration, feature extraction and radiomics. In this study, we present an advanced deep learning based INU correction algorithm called residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In cycle-GAN, an inverse transformation was implemented between the INU uncorrected and corrected magnetic resonance imaging (MRI) images to constrain the model through forcing the calculation of both an INU corrected MRI and a synthetic corrected MRI. A fully convolution neural network integrating residual blocks was applied in the generator of cycle-GAN to enhance end-to-end raw MRI to INU corrected MRI transformation. 
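Before the evaluation details that follow, the cycle-consistency constraint mentioned here can be written down compactly. A minimal sketch, assuming PyTorch generators G_fwd (raw to corrected) and G_bwd (corrected to raw); the weight lam = 10 is the common cycle-GAN default, not a value from the paper:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency(G_fwd, G_bwd, raw_mri, corrected_mri, lam=10.0):
    """Cycle-consistency term used by cycle-GAN variants: mapping
    raw -> corrected -> raw (and the reverse) should reproduce the
    inputs."""
    forward_cycle = l1(G_bwd(G_fwd(raw_mri)), raw_mri)
    backward_cycle = l1(G_fwd(G_bwd(corrected_mri)), corrected_mri)
    return lam * (forward_cycle + backward_cycle)
```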
A cohort of 55 abdominal patients with T1-weighted MR INU images and their corrections with a clinically established and commonly used method, namely N4ITK, were used as a pair to evaluate the proposed res-cycle GAN based INU correction algorithm. Quantitative comparisons of normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indices, and spatial non-uniformity (SNU) were made between the proposed method and other approaches. Our res-cycle GAN based method achieved an NMAE of 0.011 +/- 0.002, a PSNR of 28.0 +/- 1.9 dB, an NCC of 0.970 +/- 0.017, and an SNU of 0.298 +/- 0.085. Our proposed method shows significant improvements (p < 0.05) in NMAE, PSNR, NCC and SNU over other algorithms including conventional GAN and U-net. Once the model is well trained, our approach can automatically generate the corrected MR images in a few minutes, eliminating the need for manual setting of parameters. [Dai, Xianjin; Lei, Yang; Liu, Yingzi; Wang, Tonghe; Curran, Walter J.; Patel, Pretesh; Liu, Tian; Yang, Xiaofeng] Emory Univ, Dept Radiat Oncol, Atlanta, GA 30322 USA; [Dai, Xianjin; Lei, Yang; Liu, Yingzi; Wang, Tonghe; Curran, Walter J.; Patel, Pretesh; Liu, Tian; Yang, Xiaofeng] Emory Univ, Winship Canc Inst, Atlanta, GA 30322 USA; [Ren, Lei] Duke Univ, Dept Radiat Oncol, Durham, NC 27708 USA Yang, XF (corresponding author), Emory Univ, Dept Radiat Oncol, Atlanta, GA 30322 USA.; Yang, XF (corresponding author), Emory Univ, Winship Canc Inst, Atlanta, GA 30322 USA. xiaofeng.yang@emory.edu Lei, Yang/AAE-5089-2019 Lei, Yang/0000-0002-3572-0345 National Cancer Institute of the National Institutes of HealthUnited States Department of Health & Human ServicesNational Institutes of Health (NIH) - USANIH National Cancer Institute (NCI) [R01-CA215718]; National Institute of Biomedical Imaging and Bioengineering of the National Institutes of HealthUnited States Department of Health & Human ServicesNational Institutes of Health (NIH) - USANIH National Institute of Biomedical Imaging & Bioengineering (NIBIB) [R01-EB028324]; Department of Defense (DoD) Prostate Cancer Research Program (PCRP) [W81XWH-17-1-0438, W81XWH-19-1-0567]; Dunwoody Golf Club Prostate Cancer Research Award; Winship Cancer Institute of Emory University This research is supported in part by the National Cancer Institute of the National Institutes of Health under Grant No. R01-CA215718 and the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under grant no. R01-EB028324, the Department of Defense (DoD) Prostate Cancer Research Program (PCRP) Grant Nos. W81XWH-17-1-0438 and W81XWH-19-1-0567, Dunwoody Golf Club Prostate Cancer Research Award, and a philanthropic award provided by the Winship Cancer Institute of Emory University. Abadi M., 2016, ARXIV PREPRINT ARXIV; Agliozzo S, 2012, MED PHYS, V39, P1704, DOI 10.1118/1.3691178; Ahmed MN, 2002, IEEE T MED IMAGING, V21, P193, DOI 10.1109/42.996338; Anand Kumar G., 2019, Microelectronics, Electromagnetics and Telecommunications. Proceedings of the Fourth ICMEET 2018.
Lecture Notes in Electrical Engineering (LNEE 521), P703, DOI 10.1007/978-981-13-1906-8_71; AXEL L, 1987, AM J ROENTGENOL, V148, P418, DOI 10.2214/ajr.148.2.418; Barker GJ, 1998, BRIT J RADIOL, V71, P59, DOI 10.1259/bjr.71.841.9534700; Beavis AW, 1998, BRIT J RADIOL, V71, P544, DOI 10.1259/bjr.71.845.9691900; Beddy P, 2011, RADIOLOGY, V258, P583, DOI 10.1148/radiol.10100912; Belaroussi B, 2006, MED IMAGE ANAL, V10, P234, DOI 10.1016/j.media.2005.09.004; Blessy SAPS, 2020, COMP M BIO BIO E-IV, V8, P40, DOI 10.1080/21681163.2018.1562994; Brandao S, 2013, CLIN RADIOL, V68, pE617, DOI 10.1016/j.crad.2013.06.004; Briechle K, 2001, PROC SPIE, V4387, P95, DOI 10.1117/12.421129; Dai XJ, 2020, MED PHYS, V47, P4115, DOI 10.1002/mp.14307; Deichmann R, 2002, MAGNET RESON MED, V47, P398, DOI 10.1002/mrm.10050; Delfaut EM, 1999, RADIOGRAPHICS, V19, P373, DOI 10.1148/radiographics.19.2.g99mr03373; Dong X, 2019, RADIOTHER ONCOL, V141, P192, DOI 10.1016/j.radonc.2019.09.028; Dowling JA, 2012, INT J RADIAT ONCOL, V83, pE5, DOI 10.1016/j.ijrobp.2011.11.056; Fedorov A, 2012, MAGN RESON IMAGING, V30, P1323, DOI 10.1016/j.mri.2012.05.001; Ganzetti M, 2016, FRONT NEUROINFORM, V10, DOI 10.3389/fninf.2016.00010; Ganzetti M, 2016, NEUROINFORMATICS, V14, P5, DOI 10.1007/s12021-015-9277-2; Giannini V, 2013, MULTIMODALITY BREAST, DOI [10.1117/3.1000499.ch4, DOI 10.1117/3.1000499.CH4]; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Haimerl M, 2018, SCI REP-UK, V8, DOI 10.1038/s41598-018-32207-6; Harms J, 2019, MED PHYS, V46, P3998, DOI 10.1002/mp.13656; HASELGROVE J, 1986, Magnetic Resonance Imaging, V4, P469, DOI 10.1016/0730-725X(86)90024-X; He K., 2016, EUR C COMP VIS, P630, DOI DOI 10.1007/978-3-319-46493-0_38; Heinrich Mattias P., 2018, Current Directions in Biomedical Engineering, V4, P297, DOI 10.1515/cdbme-2018-0072; Hou ZJ, 2006, INT J BIOMED IMAGING, V2006, DOI 10.1155/IJBI/2006/49515; Kaiming He, 2016, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), P770, DOI 10.1109/CVPR.2016.90; Kikinis R., 2014, INTRAOPERATIVE IMAGI, V3, P277, DOI DOI 10.1007/978-1-4614-7657-3_19; Lei Y, 2019, PHYS MED BIOL, V64, DOI 10.1088/1361-6560/ab4891; Lei Y, 2019, MED PHYS, V46, P3565, DOI 10.1002/mp.13617; Li CM, 2014, MAGN RESON IMAGING, V32, P913, DOI 10.1016/j.mri.2014.03.010; Li CM, 2011, IEEE T IMAGE PROCESS, V20, P2007, DOI 10.1109/TIP.2011.2146190; Li XH, 2014, J MAGN RESON IMAGING, V40, P58, DOI 10.1002/jmri.24329; Liang Z-P, 2000, PRINCIPLES MAGNETIC, DOI [10.1118/1.3519869, DOI 10.1118/1.3519869]; Likar B, 2001, IEEE T MED IMAGING, V20, P1398, DOI 10.1109/42.974934; Lin MQ, 2011, MED PHYS, V38, P5, DOI 10.1118/1.3519869; Liu H, 2018, SIGNAL IMAGE VIDEO P, V12, P791, DOI 10.1007/s11760-017-1221-5; Low RN, 2007, LANCET ONCOL, V8, P525, DOI 10.1016/S1470-2045(07)70170-5; MCVEIGH ER, 1986, MED PHYS, V13, P806, DOI 10.1118/1.595967; MEYER CR, 1995, IEEE T MED IMAGING, V14, P36, DOI 10.1109/42.370400; Murakami JW, 1996, MAGNET RESON MED, V35, P585, DOI 10.1002/mrm.1910350419; Nie Dong, 2017, Med Image Comput Comput Assist Interv, V10435, P417, DOI 10.1007/978-3-319-66179-7_48; OGAWA S, 1990, P NATL ACAD SCI USA, V87, P9868, DOI 10.1073/pnas.87.24.9868; Pieper S, 2004, 2004 2ND IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING: MACRO TO NANO, VOLS 1 and 2, P632; Plewes DB, 2012, J MAGN RESON IMAGING, V35, P1038, DOI 10.1002/jmri.23642; Reeder SB, 2010, MAGN RESON IMAGING C, V18, P337, DOI 10.1016/j.mric.2010.08.013; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 
10.1007/978-3-319-24574-4_28; Schmidt MA, 2015, PHYS MED BIOL, V60, pR323, DOI 10.1088/0031-9155/60/22/R323; Simko A., 2019, GEN NETWORK MRI INTE; Subudhi BN, 2019, IEEE J TRANSL ENG HE, V7, DOI 10.1109/JTEHM.2019.2898870; Tamada D, 2020, ARXIV200212889; Tustison NJ, 2010, IEEE T MED IMAGING, V29, P1310, DOI 10.1109/TMI.2010.2046908; Venkatesh V, 2020, COMPUT MED IMAG GRAP, V84, DOI 10.1016/j.compmedimag.2020.101748; Vignati A, 2015, PHYS MED BIOL, V60, P2685, DOI 10.1088/0031-9155/60/7/2685; Vignati A, 2011, J MAGN RESON IMAGING, V34, P1341, DOI 10.1002/jmri.22680; Vovk U, 2007, IEEE T MED IMAGING, V26, P405, DOI 10.1109/TMI.2006.891486; Wan FK, 2019, PROC SPIE, V10949, DOI 10.1117/12.2512950; Wang TH, 2021, PRO BIOMED OPT IMAG, V11317, DOI 10.1117/12.2548152; Yoo JC, 2009, CIRC SYST SIGNAL PR, V28, P819, DOI [10.1007/s00034-009-9130-7, 10.1007/S00034-009-9130-7]; Young S. W., 1987, MAGNETIC RESONANCE I; Zhu JY, 2017, IEEE I CONF COMP VIS, P2242, DOI 10.1109/ICCV.2017.244 63 9 9 5 9 IOP PUBLISHING LTD BRISTOL TEMPLE CIRCUS, TEMPLE WAY, BRISTOL BS1 6BE, ENGLAND 0031-9155 1361-6560 PHYS MED BIOL Phys. Med. Biol. NOV 7 2020 65 21 215025 10.1088/1361-6560/abb31f 12 Engineering, Biomedical; Radiology, Nuclear Medicine & Medical Imaging Engineering; Radiology, Nuclear Medicine & Medical Imaging OW1RE WOS:000592672300001 33245059 Green Submitted, Green Accepted 2021-09-15 J Nguyen, DT; Pham, TD; Batchuluun, G; Noh, KJ; Park, KR Dat Tien Nguyen; Tuyen Danh Pham; Batchuluun, Ganbayar; Noh, Kyoung Jun; Park, Kang Ryoung Presentation Attack Face Image Generation Based on a Deep Generative Adversarial Network SENSORS English Article generative adversarial network; presentation attack detection; artificial image generation; presentation attack face images NEURAL-NETWORKS Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training PAD systems. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images using a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and possibly enhance the performance of PAD systems. Our study is the first attempt to generate PA face images for PAD system based on CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images can capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training. 
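One plausible reading of the PAD-based quality measure proposed in this record: pass generated images through a trained face-PAD classifier and record how confidently they are labelled as attacks. A hedged sketch only; the paper's exact scoring protocol may differ, and the classifier is assumed to output one logit per image with 'attack' as the positive class:

```python
import torch

def pad_based_quality(pad_classifier, generated_batch):
    """Score a batch of generated presentation-attack images by the mean
    attack probability assigned by a trained face-PAD classifier."""
    with torch.no_grad():
        p_attack = torch.sigmoid(pad_classifier(generated_batch))
    return p_attack.mean().item()
```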
[Dat Tien Nguyen; Tuyen Danh Pham; Batchuluun, Ganbayar; Noh, Kyoung Jun; Park, Kang Ryoung] Dongguk Univ, Div Elect & Elect Engn, 30 Pildong Ro 1 Gil, Seoul 04620, South Korea Batchuluun, G (corresponding author), Dongguk Univ, Div Elect & Elect Engn, 30 Pildong Ro 1 Gil, Seoul 04620, South Korea. nguyentiendat@dongguk.edu; phamdanhtuyen@gmail.com; ganabata87@dongguk.edu; kjn0908@naver.com; parkgr@dongguk.edu Batchuluun, Ganbayar/AAT-6377-2020 Batchuluun, Ganbayar/0000-0003-1456-5697 National Research Foundation of Korea (NRF) - Korean Government, Ministry of Science and ICT (MSIT) [NRF-2017R1C1B5074062]; NRF - MSIT through the Basic Science Research Program [NRF-2020R1A2C1006179]; NRF - MSIT, through the Bio and Medical Technology Development Program [NRF-2016M3A9E1915855] This work was supported in part by the National Research Foundation of Korea (NRF) funded by the Korean Government, Ministry of Science and ICT (MSIT), under Grant NRF-2017R1C1B5074062, in part by the NRF funded by the MSIT through the Basic Science Research Program under Grant NRF-2020R1A2C1006179, and in part by the NRF funded by the MSIT, through the Bio and Medical Technology Development Program under Grant NRF-2016M3A9E1915855. Benlamoudi A, 2017, P CGE10SPOOFING ALG; Bontrager P, DEEPMASTERPRINT FING; Borji A, 2019, COMPUT VIS IMAGE UND, V179, P41, DOI 10.1016/j.cviu.2018.10.009; Boulkenafet Z, 2015, IEEE IMAGE PROC, P2636, DOI 10.1109/ICIP.2015.7351280; Chen J, FSRNET END END LEARN; Chu PM, 2019, IEEE ACCESS, V7, P1021, DOI 10.1109/ACCESS.2018.2886213; Costa-Pazo A, 2016, P INT C BIOM SPEC IN, P1, DOI DOI 10.1109/BIOSIG.2016.7736936; Nguyen DT, 2019, SENSORS-BASEL, V19, DOI 10.3390/s19020410; Nguyen DT, 2018, SENSORS-BASEL, V18, DOI 10.3390/s18030699; Nguyen DT, 2017, SENSORS-BASEL, V17, DOI 10.3390/s17030637; de Souza GB, 2017, IEEE T CIRCUITS-II, V64, P1397, DOI 10.1109/TCSII.2017.2764460; Goodfellow I, GENERATIVE ADVERSARI; He K., DEEP RESIDUAL LEARNI; Heusel M, GANS TRAINED 2 TIME; Huang G., DENSELY CONNECTED CO; ISO Standard, 3010732017 ISOIEC; Isola P, IMAGE IMAGE TRANSLAT; Jain AK, 2004, IEEE T CIRC SYST VID, V14, P4, DOI 10.1109/TCSVT.2003.818349; Kazemi V, 2014, PROC CVPR IEEE, P1867, DOI 10.1109/CVPR.2014.241; Kim S, 2015, SENSORS-BASEL, V15, P1537, DOI 10.3390/s150101537; Krizhevsky A, 2017, COMMUN ACM, V60, P84, DOI 10.1145/3065386; Kupyn O., DEBLURGAN BLIND MOTI; Lecun Y, 1998, P IEEE, V86, P2278, DOI 10.1109/5.726791; Ledig Christian, PHOTO REALISTIC SING; Lee WO, 2014, SENSORS-BASEL, V14, P21726, DOI 10.3390/s141121726; Liu Y, DEEP TREE LEARNING Z; Lucic M, ARE GANS CREATED EQU; Maatta J, 2011, P IEEE INT JOINT C B, P1; Mao X., LEAST SQUARES GENERA; Menotti D, 2015, IEEE T INF FOREN SEC, V10, P864, DOI 10.1109/TIFS.2015.2398817; Minaee S, FINGER GAN GENERATIN; Minaee S, IRIS GAN LEARNING GE; Muhammad K, 2018, IEEE ACCESS, V6, P18174, DOI 10.1109/ACCESS.2018.2812835; Nguyen D.T., 2012, ADV SCI LETT, V5, P85, DOI DOI 10.1166/asl.2012.2177; Ojala T, 2002, IEEE T PATTERN ANAL, V24, P971, DOI 10.1109/TPAMI.2002.1017623; Pan J, SALGAN VISUAL SALIEN; Parveen S, 2016, COMPUTERS, V5, DOI 10.3390/computers5020010; Perarnau G, INVERTIBLE CONDITION; Simonyan K., VERY DEEP CONVOLUTIO; Srivastava N, 2014, J MACH LEARN RES, V15, P1929; Szegedy C, GOING DEEPER CONVOLU; Taigman Y, 2014, PROC CVPR IEEE, P1701, DOI 10.1109/CVPR.2014.220; Tan DS, 2019, SENSORS-BASEL, V19, DOI 10.3390/s19071587; Wang GY, 2018, PROCEEDINGS OF THE 2018 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2018), DOI 
10.1145/3173574.3174143; Yi Z., DUALGAN UNSUPERVISED; Zhang H, SELF ATTENTION GENER; Zhang H, IMAGE RAINING USING; Zhao JJ, 2018, IEEE T CIRC SYST VID, V28, P2679, DOI 10.1109/TCSVT.2017.2710120; Zhiwei Zhang, 2012, 2012 5th IAPR International Conference on Biometrics (ICB), P26, DOI 10.1109/ICB.2012.6199754; Zhu J.-Y., UNPAIRED IMAGE IMAGE 50 1 1 0 1 MDPI BASEL ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND 1424-8220 SENSORS-BASEL Sensors APR 2020 20 7 1810 10.3390/s20071810 24 Chemistry, Analytical; Engineering, Electrical & Electronic; Instruments & Instrumentation Chemistry; Engineering; Instruments & Instrumentation LT5KW WOS:000537110500006 32218126 Green Published, gold 2021-09-15 J Kim, JH; Ryu, S; Jeong, J; So, D; Ban, HJ; Hong, SW Kim, Ji-Hye; Ryu, Sumin; Jeong, Jaehoon; So, Damwon; Ban, Hyun-Ju; Hong, Sungwook Impact of Satellite Sounding Data on Virtual Visible Imagery Generation Using Conditional Generative Adversarial Network IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING English Article Cloud computing; Satellites; Meteorology; Gallium nitride; Ocean temperature; Generative adversarial networks; Sea surface; Clouds; conditional generative adversarial network (CGAN); deep learning; multiband; nighttime; typhoon; visible (VIS) SPLIT-WINDOW; CLASSIFICATION The visible band of satellite sensors is of limited use during the night due to a lack of solar reflection. This study presents an improved conditional generative adversarial networks (CGANs) model to generate virtual nighttime visible imagery using infrared (IR) multiband satellite observations and the brightness temperature difference between the two IR bands in the communication, ocean, and meteorological satellite. For the summer daytime case study with visible band imagery, our multiband CGAN model showed better statistical results [correlation coefficient (CC) = 0.952, bias = -1.752 (in a digital number (DN) unit from 0 to 255, converted from reflectance from 0 to 1), and root-mean-square-error (RMSE) = 26.851 DN] than the single-band CGAN model using a pair of visible and IR bands (CC = 0.916, bias = -4.073 DN, and RMSE = 35.349 DN). The proposed multiband CGAN model performed better than the single-band CGAN model, particularly, in convective clouds and typhoons, because of the sounding effects from the water vapor band. In addition, our multiband CGAN model provided detailed patterns for clouds and typhoons at twilight. Therefore, our results could be used for visible-based nighttime weather analysis of convective clouds and typhoons, using data from next-generation geostationary meteorological satellites. [Kim, Ji-Hye; Ryu, Sumin; So, Damwon; Ban, Hyun-Ju; Hong, Sungwook] Sejong Univ, Dept Environm Energy & Geoinfomat, Seoul 100011, South Korea; [Hong, Sungwook] DeepThoTh Co Ltd, Dept Res & Dev, Seoul 05006, South Korea; [Jeong, Jaehoon] Natl Inst Environm Res, Incheon 400011, South Korea Hong, SW (corresponding author), Sejong Univ, Dept Environm Energy & Geoinfomat, Seoul 100011, South Korea. 
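The CC, bias, and RMSE statistics quoted in this record are standard and easy to reproduce. A minimal NumPy sketch over digital-number (DN) arrays; the function name and arguments are illustrative:

```python
import numpy as np

def cc_bias_rmse(generated_dn, reference_dn):
    """Agreement statistics between generated and reference imagery,
    computed on flattened digital-number arrays."""
    g = np.asarray(generated_dn, dtype=float).ravel()
    r = np.asarray(reference_dn, dtype=float).ravel()
    cc = np.corrcoef(g, r)[0, 1]          # correlation coefficient
    bias = float(np.mean(g - r))          # mean signed error
    rmse = float(np.sqrt(np.mean((g - r) ** 2)))
    return cc, bias, rmse
```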
jai.kim410@sejong.ac.kr; ryusm26@sju.ac.kr; jaehoon80@korea.kr; dws328@sejong.ac.kr; hjban@sju.ac.kr; sesttiya@deep-thoth.org Korea Meteorological Administration Research and Development Program [KMI2020-00510]; National Institute of Environment Research (NIER) - Ministry of Environment (MOE) of the Republic of Korea [NIER-2020-01-01-004] This work was supported in part by the Korea Meteorological Administration Research and Development Program under Grant KMI2020-00510 and in part by a grant from the National Institute of Environment Research (NIER), funded by the Ministry of Environment (MOE) of the Republic of Korea (NIER-2020-01-01-004). Acharya UR, 2003, PATTERN RECOGN, V36, P61, DOI 10.1016/S0031-3203(02)00063-8; Arel I, 2010, IEEE COMPUT INTELL M, V5, P13, DOI 10.1109/MCI.2010.938364; Bengio Y, 2009, FOUND TRENDS MACH LE, V2, P1, DOI 10.1561/2200000006; Chi J, 2019, REMOTE SENS ENVIRON, V231, DOI 10.1016/j.rse.2019.05.023; Denton E, 2015, ARXIV150605751; Goodfellow I., 2014, ARXIV14062661; Hernandez E, 2016, LECT NOTES ARTIF INT, V9648, P151, DOI 10.1007/978-3-319-32034-2_13; Hopkins E., WEATHER SATELLITE IM; INOUE T, 1987, J GEOPHYS RES-ATMOS, V92, P3991, DOI 10.1029/JD092iD04p03991; Isola P, 2016, CORR; Kim J., 2017, ARXIV170305192; Kim K., 2019, REMOTE SENS, V11; Kim M., 1991, ASIA PAC J ATMOS SCI, V27, P353; Kim SH, 2019, ASIA-PAC J ATMOS SCI, V55, P337, DOI 10.1007/s13143-018-0093-0; Kim Y., 2019, REMOTE SENS, V11; Kiran B., 2018, ARXIV180103149; Lee JR, 2011, ASIA-PAC J ATMOS SCI, V47, P113, DOI 10.1007/s13143-011-0002-2; Li C, 2016, LECT NOTES COMPUT SC, V9907, P702, DOI 10.1007/978-3-319-46487-9_43; Liang M, 2015, PROC CVPR IEEE, P3367, DOI 10.1109/CVPR.2015.7298958; Lutz HJ, 2003, J METEOROL SOC JPN, V81, P623, DOI 10.2151/jmsj.81.623; Mao X, 2017, ARXIV161104076; Michelsanti D., 2017, ARXIV170901703; Mirza M., 2014, ARXIV14111784; Pathak D, 2016, PROC CVPR IEEE, P2536, DOI 10.1109/CVPR.2016.278; Radford A, 2016, ARXIV151106434; Razavian A- S., 2014, CORR, V1403, P6382; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Santos C.N.D., 2017, ARXIV170702198; Schroeder W, 2016, REMOTE SENS ENVIRON, V185, P210, DOI 10.1016/j.rse.2015.08.032; Shlens, 2017, ARXIV161009585; Tan J, 2019, SENSORS-BASEL, V19, DOI 10.3390/s19030643; Wang X., 2016, ARXIV160305631; Y.-C. Lin, PIX2PIX TENSORFLOW; Zhang R., 2016, ARXIV160308511; Zhu J.-Y., 2018, ARXIV170310593 35 4 4 3 4 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 1939-1404 2151-1535 IEEE J-STARS IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2020 13 4532 4541 10.1109/JSTARS.2020.3013598 10 Engineering, Electrical & Electronic; Geography, Physical; Remote Sensing; Imaging Science & Photographic Technology Engineering; Physical Geography; Remote Sensing; Imaging Science & Photographic Technology NG7TR WOS:000564184200002 gold 2021-09-15 J Ali-Gombe, A; Elyan, E Ali-Gombe, Adamu; Elyan, Eyad MFC-GAN: Class-imbalanced dataset classification using Multiple Fake Class Generative Adversarial Network NEUROCOMPUTING English Article Image classification; Imbalanced data; Deep learning Class-imbalanced datasets are common across different domains such as health, banking, security and others. With such datasets, the learning algorithms are often biased toward the majority class-instances. Data augmentation is a common approach that aims at rebalancing a dataset by injecting more data samples of the minority class instances. 
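That rebalancing step can be sketched in a few lines. The sketch below assumes a binary problem and a trained generator G whose outputs match the feature shape of X; MFC-GAN's multiple fake classes and conditional generation are deliberately not reproduced here:

```python
import numpy as np
import torch

def rebalance_with_gan(X, y, G, minority_label, noise_dim=100):
    """Augment the minority class with GAN-generated samples until the
    two classes have equal counts (binary case assumed)."""
    n_min = int(np.sum(y == minority_label))
    n_new = max(len(y) - 2 * n_min, 0)   # majority count minus minority count
    with torch.no_grad():
        synth = G(torch.randn(n_new, noise_dim)).numpy()
    X_aug = np.concatenate([X, synth])
    y_aug = np.concatenate([y, np.full(n_new, minority_label)])
    return X_aug, y_aug
```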
In this paper, a new data augmentation approach is proposed using a Generative Adversarial Networks (GAN) to handle the class imbalance problem. Unlike common GAN models, which use a single fake class, the proposed method uses multiple fake classes to ensure a fine-grained generation and classification of the minority class instances. Moreover, the proposed GAN model is conditioned to generate minority class instances aiming at rebalancing the dataset. Extensive experiments were carried out using public datasets, where synthetic samples generated using our model were added to the imbalanced dataset, followed by performing classification using Convolutional Neural Network. Experiment results show that our model can generate diverse minority class instances, even in extreme cases where the number of minority class instances is relatively low. Additionally, superior performance of our model over other common augmentation and oversampling methods was achieved in terms of classification accuracy and quality of the generated samples. (C) 2019 Elsevier B.V. All rights reserved. [Ali-Gombe, Adamu; Elyan, Eyad] Robert Gordon Univ, Sch Comp Sci & Digital Media, Aberdeen, Scotland; [Elyan, Eyad] Robert Gordon Univ, Higher Educ Acad, Aberdeen, Scotland Ali-Gombe, A (corresponding author), Robert Gordon Univ, Sch Comp Sci & Digital Media, Aberdeen, Scotland. a.ali-gombe@rgu.ac.uk Ali-Gombe, Adamu/AAC-8805-2020 Ali-Gombe, Adamu/0000-0001-7152-5697; Elyan, Eyad/0000-0002-8342-9026 Adamu A.-G., 2018, P 2018 INT JOINT C N; Ali-Gombe A., 2017, P INT C ENG APPL NEU; Antoniou A., ARXIV171104340; Baur C., ARXIV180404338; Brox T., 2014, ADV NEURAL INFORM PR, P766, DOI DOI 10.1109/TPAMI.2015.2496141; Buda M., ARXIV171005381; Chawla NV, 2002, J ARTIF INTELL RES, V16, P321, DOI 10.1613/jair.953; Cohen G, 2017, IEEE IJCNN, P2921, DOI 10.1109/IJCNN.2017.7966217; Denton E. 
L., 2015, ADV NEURAL INFORM PR, DOI DOI 10.5555/; Dong Q., 2017, ICCV; Douzas G, 2018, EXPERT SYST APPL, V91, P464, DOI 10.1016/j.eswa.2017.09.030; Fernandez A, 2013, KNOWL-BASED SYST, V42, P97, DOI 10.1016/j.knosys.2013.01.018; Frid-Adar M., ARXIV180102385; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gurumurthy S, 2017, P 9 INT C COMM SYST, P166; He K., 2016, PROC CVPR IEEE, P770, DOI DOI 10.1109/CVPR.2016.90; Huang C, 2016, PROC CVPR IEEE, P5375, DOI 10.1109/CVPR.2016.580; Inoue H., ARXIV180102929; Karras Tero, ICLR2018; Krawczyk B, 2016, PROG ARTIF INTELL, V5, P221, DOI 10.1007/s13748-016-0094-0; Krizhevsky A., 2012, ADV NEURAL INFORM PR, V25, P1097; LeCun Y., 1990, ADV NEURAL INFORM PR, DOI DOI 10.1111/DSU.12130; Mariani G., ARXIV180309655; Mirza M., 2014, ARXIV14111784; Miyato T., ICLR2018; Odena A., 2016, ARXIV160601583; Odena A, 2017, PR MACH LEARN RES, V70; Radford A., 2015, ARXIV PREPRINT ARXIV; Wan LP, 2018, INT CONF BIOMETR, P98, DOI 10.1109/ICB2018.2018.00025; Wang SJ, 2016, IEEE IJCNN, P4368, DOI 10.1109/IJCNN.2016.7727770; Wu B, 2011, NIPSW; Zeiler M.D., 2012, ARXIV12125701; Zhang H., ARXIV171009412; Zhu X., 2017, ARXIV171100648 34 24 25 6 54 ELSEVIER AMSTERDAM RADARWEG 29, 1043 NX AMSTERDAM, NETHERLANDS 0925-2312 1872-8286 NEUROCOMPUTING Neurocomputing OCT 7 2019 361 212 221 10.1016/j.neucom.2019.06.043 10 Computer Science, Artificial Intelligence Computer Science IQ0AV WOS:000480413200021 Green Accepted 2021-09-15 J Ezeme, OM; Mahmoud, QH; Azim, A Ezeme, Okwudili M.; Mahmoud, Qusay H.; Azim, Akramul Design and Development of AD-CGAN: Conditional Generative Adversarial Networks for Anomaly Detection IEEE ACCESS English Article Anomaly detection; Machine learning; Hidden Markov models; Generative adversarial networks; Gallium nitride; Data models; Context modeling; Anomaly detection; transfer learning; deep learning; generative adversarial networks NEURAL-NETWORK; FRAMEWORK Whether in the realm of software or hardware, datasets representing the state of systems are mostly imbalanced. This imbalance is because these systems' reliability requirements make the occurrence of an anomaly a rare phenomenon. Hence, most datasets on anomaly detection have a relatively small percentage that captures the anomaly. Recently, generative adversarial networks (GAN) have shown promising results in image generation tasks. Therefore, in this research work, we build on conditional GANs (CGAN) to generate plausible distributions of a given profile to solve the challenge of data imbalance in anomaly detection tasks and present a novel framework for anomaly detection. Firstly, we learn the pattern of the minority class data samples using a single class CGAN. Secondly, we use the knowledge base of the single class CGAN to generate samples that augment the minority class samples so that a binary class CGAN can train on the typical and malicious profiles with a balanced dataset. This approach inherently eliminates the bias imposed on algorithms from the dataset and results in a robust framework with improved generalization. Thirdly, the binary class CGAN generates a knowledge base that we use to construct the cluster-based anomaly detector. During testing, we do not use the single class CGAN, thereby providing us with a lean and efficient algorithm for anomaly detection that can do anomaly detection on semi-supervised and non-parametric multivariate data. We test the framework on logs and image-based anomaly detection datasets with class imbalance. 
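The class-conditioned generation underlying AD-CGAN's augmentation step can be illustrated with a minimal conditional generator, in which the profile label is embedded and concatenated with the noise vector; all sizes here are hypothetical and the published architecture differs:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Minimal conditional generator: samples of a chosen class (e.g.
    the anomalous profile) can be generated on demand."""
    def __init__(self, noise_dim=100, n_classes=2, emb_dim=16, out_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_classes, emb_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + emb_dim, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, z, labels):
        return self.net(torch.cat([z, self.embed(labels)], dim=1))

G = ConditionalGenerator()
anomalies = G(torch.randn(8, 100), torch.ones(8, dtype=torch.long))  # class 1
```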
We compare the performance of AD-CGAN with GAN-derived and non-GAN-derived state-of-the-art algorithms on benchmark datasets. AD-CGAN outperforms most of the algorithms on the standard metrics of Precision, Recall, and F1 score. Where AD-CGAN does not perform best on these metrics, it has the advantage of being lightweight. Therefore, it can be deployed for both online and offline anomaly detection tasks, since it does not use an input sample inversion strategy. [Ezeme, Okwudili M.; Mahmoud, Qusay H.; Azim, Akramul] Ontario Tech Univ, Dept Elect Comp & Software Engn, Oshawa, ON L1G 0C5, Canada Ezeme, OM (corresponding author), Ontario Tech Univ, Dept Elect Comp & Software Engn, Oshawa, ON L1G 0C5, Canada. mellitus.ezeme@ontariotechu.net Ezeme, Okwudili/0000-0002-0957-0566 Natural Sciences and Engineering Research Council of Canada (NSERC)Natural Sciences and Engineering Research Council of Canada (NSERC) This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Campos GO, 2016, DATA MIN KNOWL DISC, V30, P891, DOI 10.1007/s10618-015-0444-8; Chalapathy R., 2019, ARXIV190103407; Chandola V, 2009, ACM COMPUT SURV, V41, DOI 10.1145/1541880.1541882; Chen X, 2016, ADV NEUR IN, V29; Du M, 2017, CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, P1285, DOI 10.1145/3133956.3134015; Ezeme M., 2017, P 4 IEEE ACM INT C B, P43, DOI [10.1145/3148055.3148076, DOI 10.1145/3148055.3148076]; Ezeme MO, 2019, IEEE ACCESS, V7, P18860, DOI 10.1109/ACCESS.2019.2897122; Ezeme MO, 2018, 2018 IEEE 24TH INTERNATIONAL CONFERENCE ON EMBEDDED AND REAL-TIME COMPUTING SYSTEMS AND APPLICATIONS (RTCSA), P225, DOI 10.1109/RTCSA.2018.00035; Ezeme O. M., 2020, IEEE T KNOWL DATA EN, DOI [10.1109/TKDE.2020.2978469, DOI 10.1109/TKDE.2020.2978469]; Ezeme OM, 2021, IEEE T EMERG TOP COM, V9, P957, DOI 10.1109/TETC.2020.2971251; Ezeme OM, 2019, LECT NOTES ARTIF INT, V11489, P549, DOI 10.1007/978-3-030-18305-9_58; Goldstein M, 2016, PLOS ONE, V11, DOI 10.1371/journal.pone.0152173; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gu Yu, 2005, P 5 ACM SIGCOMM C IN, P32; Han T, 2020, ISA T, V97, P269, DOI 10.1016/j.isatra.2019.08.012; Han T, 2019, KNOWL-BASED SYST, V165, P474, DOI 10.1016/j.knosys.2018.12.019; Hawkins D. M, 1980, IDENTIFICATION OUTLI, V11; Hinton Geoffrey E, 2003, ADV NEURAL INFORM PR, V15, P857; Hofmeyr S.
A., 1998, Journal of Computer Security, V6, P151; Kosoresow AP, 1997, IEEE SOFTWARE, V14, P35, DOI 10.1109/52.605929; Li D, 2019, ROUT INT HANDB, P706; Li F, 2017, IEEE T SOFTWARE ENG, V43, P760, DOI 10.1109/TSE.2016.2632122; Liu FT, 2012, ACM T KNOWL DISCOV D, V6, DOI 10.1145/2133360.2133363; Luo Y., 2020, ACM COMPUT SURV, P29; Man-Ki Yoon, 2017, 2017 IEEE/ACM Second International Conference on Internet-of-Things Design and Implementation (IoTDI), P191, DOI 10.1145/3054977.3054999; Manek G., 2018, ARXIV180206222 ARXIV180206222; Mirza M., 2014, ARXIV14111784; Mutz D., 2006, ACM Transactions on Information and Systems Security, V9, P61, DOI 10.1145/1127345.1127348; Salem M, 2016, PROC EUROMICR, P97, DOI 10.1109/ECRTS.2016.22; Schlegl T, 2017, LECT NOTES COMPUT SC, V10265, P146, DOI 10.1007/978-3-319-59050-9_12; Wang Z, 2019, ARXIV190601529; Warrender C, 1999, P IEEE S SECUR PRIV, P133, DOI 10.1109/SECPRI.1999.766910; Wattenberg M, 2016, DISTILL, DOI [10.23915/distill.00002, DOI 10.23915/DISTILL.00002]; Xiong YJ, 2019, IEEE ACCESS, V7, P147345, DOI 10.1109/ACCESS.2019.2936844; Xu W., 2009, P SOSP; Zhai S., 2016, ARXIV160507717; Zhang GP, 2003, NEUROCOMPUTING, V50, P159, DOI 10.1016/S0925-2312(01)00702-0; Zou H, 2006, J COMPUT GRAPH STAT, V15, P265, DOI 10.1198/106186006X113430 38 0 0 4 10 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 2169-3536 IEEE ACCESS IEEE Access 2020 8 177667 177681 10.1109/ACCESS.2020.3025530 15 Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications Computer Science; Engineering; Telecommunications OA6JY WOS:000577890100001 gold 2021-09-15 J Bi, LN; Hu, GP Bi, Luning; Hu, Guiping Improving Image-Based Plant Disease Classification With Generative Adversarial Network Under Limited Training Set FRONTIERS IN PLANT SCIENCE English Article plant disease; classification; regularization; convolutional neural network; generative adversarial network Traditionally, plant disease recognition has mainly been done visually by humans. It is often biased, time-consuming, and laborious. Machine learning methods based on plant leaf images have been proposed to improve the disease recognition process. Convolutional neural networks (CNNs) have been adopted and proven to be very effective. Despite the good classification accuracy achieved by CNNs, the issue of limited training data remains. In most cases, the training dataset is often small due to the significant effort required for data collection and annotation. In this case, CNN methods tend to overfit. In this paper, Wasserstein generative adversarial network with gradient penalty (WGAN-GP) is combined with label smoothing regularization (LSR) to improve the prediction accuracy and address the overfitting problem under limited training data. Experiments show that the proposed WGAN-GP enhanced classification method can improve the overall classification accuracy of plant diseases by 24.4% as compared to 20.2% using classic data augmentation and 22% using synthetic samples without LSR. [Bi, Luning; Hu, Guiping] Iowa State Univ, Dept Ind & Mfg Syst Engn, Ames, IA 50011 USA Hu, GP (corresponding author), Iowa State Univ, Dept Ind & Mfg Syst Engn, Ames, IA 50011 USA. gphu@iastate.edu Bi, Luning/P-5716-2019 Bi, Luning/0000-0002-3227-911X Plant Sciences Institute's Faculty Scholars program at Iowa State University This work is partially supported by the Plant Sciences Institute's Faculty Scholars program at Iowa State University.
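A minimal PyTorch sketch of the two ingredients the Bi and Hu record above combines: the WGAN-GP gradient penalty, which keeps the critic approximately 1-Lipschitz, and label smoothing regularization for the downstream classifier. The toy critic, lambda = 10, and smoothing = 0.1 are conventional defaults assumed here rather than values from the paper; the label_smoothing argument requires a recent PyTorch release.

import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(128, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

def gradient_penalty(critic, x_real, x_fake, lambda_gp=10.0):
    # Interpolate between real and fake samples, then penalize deviations of
    # the critic's gradient norm from 1 (the 1-Lipschitz constraint).
    eps = torch.rand(x_real.size(0), 1)
    x_hat = (eps * x_real + (1 - eps) * x_fake).requires_grad_(True)
    grads, = torch.autograd.grad(critic(x_hat).sum(), x_hat, create_graph=True)
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()

x_real, x_fake = torch.randn(32, 128), torch.randn(32, 128)
critic_loss = critic(x_fake).mean() - critic(x_real).mean() \
              + gradient_penalty(critic, x_real, x_fake)

# Label smoothing regularization (LSR): the target mixes the one-hot label
# with a uniform component, which discourages over-confident predictions.
lsr = nn.CrossEntropyLoss(label_smoothing=0.1)
logits, labels = torch.randn(32, 5), torch.randint(0, 5, (32,))
classifier_loss = lsr(logits, labels)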
Arjovsky M., 2017, ARXIV170107875; Barbedo JGA, 2019, BIOSYST ENG, V180, P96, DOI 10.1016/j.biosystemseng.2019.02.002; Barbedo JGA, 2018, COMPUT ELECTRON AGR, V153, P46, DOI 10.1016/j.compag.2018.08.013; Barbedo JGA, 2018, BIOSYST ENG, V172, P84, DOI 10.1016/j.biosystemseng.2018.05.013; Camargo A, 2009, J ARTIFICIAL INTELLI, V66, P121; Chollet F., 2015, KERAS; DHAKATE M, 2015, NAT CONF COMPUT VIS, P1; Emersic Z, 2017, IEEE INT CONF AUTOMA, P987, DOI 10.1109/FG.2017.123; Ferentinos KP, 2018, COMPUT ELECTRON AGR, V145, P311, DOI 10.1016/j.compag.2018.01.009; Ghazi MM, 2017, NEUROCOMPUTING, V235, P228, DOI 10.1016/j.neucom.2017.01.018; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Grinblat GL, 2016, COMPUT ELECTRON AGR, V127, P418, DOI 10.1016/j.compag.2016.07.003; Gu JX, 2018, PATTERN RECOGN, V77, P354, DOI 10.1016/j.patcog.2017.10.013; Guo J., 2015, DEEP CNN ENSEMBLE DA; Hu GS, 2018, IEEE T IMAGE PROCESS, V27, P293, DOI 10.1109/TIP.2017.2756450; Hughes D., 2015, ARXIV; Gulrajani I, 2017, ADV NEUR IN, V30; Kamilaris A, 2018, COMPUT ELECTRON AGR, V147, P70, DOI 10.1016/j.compag.2018.02.016; Lu Y, 2017, NEUROCOMPUTING, V267, P378, DOI 10.1016/j.neucom.2017.06.023; Ma JC, 2018, COMPUT ELECTRON AGR, V154, P18, DOI 10.1016/j.compag.2018.08.048; Mirza M., 2014, ARXIV14111784; Mohanty SP, 2016, FRONT PLANT SCI, V7, DOI 10.3389/fpls.2016.01419; Naresh YG, 2016, NEUROCOMPUTING, V173, P1789, DOI 10.1016/j.neucom.2015.08.090; Nazki H, 2020, COMPUT ELECTRON AGR, V168, DOI 10.1016/j.compag.2019.105117; Papon J., 2015, P IEEE INT C COMP VI; Patil JK, 2011, J ADV BIOINFORM APPL, V2, P135; Pereyra G., 2017, REGULARIZING NEURAL; Radford A., 2015, ARXIV PREPRINT ARXIV; Sankaran S, 2010, COMPUT ELECTRON AGR, V72, P1, DOI 10.1016/j.compag.2010.02.007; Simonyan K., 2014, ARXIV PREPRINT; Sladojevic S, 2016, COMPUT INTEL NEUROSC, V2016, DOI 10.1155/2016/3289801; Strange RN, 2005, ANNU REV PHYTOPATHOL, V43, P83, DOI 10.1146/annurev.phyto.43.113004.133839; Szegedy Christian, 2016, P IEEE C COMP VIS PA; Xie L., 2016, P IEEE C COMP VIS PA; Zhang SW, 2016, NEUROCOMPUTING, V205, P341, DOI 10.1016/j.neucom.2016.04.034 35 1 1 2 9 FRONTIERS MEDIA SA LAUSANNE AVENUE DU TRIBUNAL FEDERAL 34, LAUSANNE, CH-1015, SWITZERLAND 1664-462X FRONT PLANT SCI Front. Plant Sci. DEC 4 2020 11 583438 10.3389/fpls.2020.583438 12 Plant Sciences Plant Sciences PG2ND WOS:000599576500001 33343595 Green Published, gold 2021-09-15 J Zhang, T; Zhu, K; Niyato, D Zhang, Tao; Zhu, Kun; Niyato, Dusit A Generative Adversarial Learning-Based Approach for Cell Outage Detection in Self-Organizing Cellular Networks IEEE WIRELESS COMMUNICATIONS LETTERS English Article Gallium nitride; Training; Generators; Data models; Machine learning algorithms; Cellular networks; Generative adversarial networks; Self-organizing network; cell outage detection; data imbalance; GAN; Adaboost To enable automatic deployment and management of cellular networks, the Self-Organizing Network (SON) paradigm was introduced to enhance network performance, improve service quality, and reduce operational and capital expenditure. Cell outage detection is an essential functionality of SON that autonomously detects cells which fail to provide services due to either software or hardware faults. Machine learning represents an effective tool for such a task. However, traditional classification algorithms for cell outage detection are likely to construct a biased classifier when training samples in one class significantly outnumber those in the other classes.
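The letter's method, continued just below, rebalances the outage dataset with GAN-synthesized minority samples and then classifies with Adaboost. Here is a minimal scikit-learn sketch of that second stage; Gaussian noise stands in for the GAN output so the example is self-contained, and all feature dimensions and class counts are invented.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(900, 8))   # healthy-cell samples (invented)
X_minor = rng.normal(2.0, 1.0, size=(30, 8))    # rare outage samples (invented)

# Placeholder for GAN output: match the minority mean/std so classes balance.
n_needed = len(X_major) - len(X_minor)
X_synth = rng.normal(X_minor.mean(0), X_minor.std(0), size=(n_needed, 8))

X = np.vstack([X_major, X_minor, X_synth])
y = np.array([0] * len(X_major) + [1] * (len(X_minor) + n_needed))

clf = AdaBoostClassifier(n_estimators=100).fit(X, y)   # boost on the calibrated set
print("training accuracy:", clf.score(X, y))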
To counter this problem, in this letter we present a novel method that learns from imbalanced cell outage data in cellular networks by combining a Generative Adversarial Network (GAN) with Adaboost. Specifically, the proposed approach utilizes a GAN to reshape the distribution of the imbalanced dataset by synthesizing additional samples for the minority class, and then uses Adaboost to classify the calibrated dataset. Experimental results show a significant improvement in classification performance for imbalanced cell outage data on several metrics, including Receiver Operating Characteristic (ROC), precision, recall rate, and F-value. [Zhang, Tao; Zhu, Kun] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China; [Zhang, Tao; Zhu, Kun] Collaborat Innovat Ctr Novel Software Technol & I, Nanjing 211106, Peoples R China; [Niyato, Dusit] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore Zhu, K (corresponding author), Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing 211106, Peoples R China. tao@nuaa.edu.cn; zhukun@nuaa.edu.cn; dniyato@ntu.edu.sg Niyato, Dusit/0000-0002-7442-7416 National Natural Science Foundation of ChinaNational Natural Science Foundation of China (NSFC) [61701230]; Natural Science Foundation of Jiangsu ProvinceNatural Science Foundation of Jiangsu Province [BK20170805]; Fundamental Research Funds for the Central UniversitiesFundamental Research Funds for the Central Universities [NE2018107] This work was supported in part by the National Natural Science Foundation of China under Grant 61701230, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20170805, and in part by the Fundamental Research Funds for the Central Universities under Grant NE2018107. The associate editor coordinating the review of this article and approving it for publication was A. Liu. (Corresponding author: Kun Zhu.) Aceto G, 2018, J NETW COMPUT APPL, V103, P131, DOI 10.1016/j.jnca.2017.11.007; Aliu OG, 2013, IEEE COMMUN SURV TUT, V15, P336, DOI 10.1109/SURV.2012.021312.00116; Asghar A, 2018, IEEE COMMUN SURV TUT, V20, P1682, DOI 10.1109/COMST.2018.2825786; Chawla NV, 2003, LECT NOTES ARTIF INT, V2838, P107, DOI 10.1007/978-3-540-39804-2_12; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; He HB, 2009, IEEE T KNOWL DATA EN, V21, P1263, DOI 10.1109/TKDE.2008.239; Montieri A., IEEE T DEPEND SECURE; Mulvey David, 2018, 2018 International Conference on Information and Communication Technology Convergence (ICTC), P441, DOI 10.1109/ICTC.2018.8539449; Onireti O, 2016, IEEE T VEH TECHNOL, V65, P2097, DOI 10.1109/TVT.2015.2431371; Yu P, 2018, WIREL COMMUN MOB COM, DOI 10.1155/2018/6201386; Zhang DF, 2019, IEEE ACCESS, V7, P78817, DOI 10.1109/ACCESS.2019.2922693 11 5 5 0 2 IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC PISCATAWAY 445 HOES LANE, PISCATAWAY, NJ 08855-4141 USA 2162-2337 2162-2345 IEEE WIREL COMMUN LE IEEE Wirel. Commun. Lett.
FEB 2020 9 2 171 174 10.1109/LWC.2019.2947041 4 Computer Science, Information Systems; Engineering, Electrical & Electronic; Telecommunications Computer Science; Engineering; Telecommunications KO8RZ WOS:000515817000010 2021-09-15 J Li, HF; Fan, R; Shi, QS; Du, ZJ Li, Huifang; Fan, Rui; Shi, Qisong; Du, Zijian Class Imbalanced Fault Diagnosis via Combining K-Means Clustering Algorithm with Generative Adversarial Networks JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS English Article class imbalance; fault diagnosis; machine learning; deep learning DEEP NEURAL-NETWORKS; INTELLIGENT DIAGNOSIS Recent advancements in machine learning and communication technologies have enabled new approaches to automated fault diagnosis and detection in industrial systems. Given the wide variation in occurrence frequencies of different classes of faults, the class distribution of real-world industrial fault data is usually imbalanced. However, most prior machine learning-based classification methods do not take this imbalance into consideration, and thus tend to be biased toward recognizing the majority classes, resulting in poor accuracy for minority ones. To solve such problems, we propose a k-means clustering generative adversarial network (KM-GAN)-based fault diagnosis approach able to reduce imbalance in fault data and improve diagnostic accuracy for minority classes. First, we design a new k-means clustering algorithm and GAN-based oversampling method to generate diverse minority-class samples obeying a distribution similar to that of the original minority data. The k-means clustering algorithm is adopted to divide minority-class samples into k clusters, while a GAN is applied to learn the data distribution of the resulting clusters and generate a given number of minority-class samples as a supplement to the original dataset. Then, we construct a deep neural network (DNN) and deep belief network (DBN)-based heterogeneous ensemble model as a fault classifier to improve generalization, in which DNN and DBN models are trained separately on the resulting dataset, and the outputs from both are averaged as the final diagnostic result. A series of comparative experiments are conducted to verify the effectiveness of our proposed method, and the experimental results show that our method can improve diagnostic accuracy for minority-class samples. [Li, Huifang; Fan, Rui; Shi, Qisong; Du, Zijian] Beijing Inst Technol, Sch Automat, 5 Zhongguancun South St, Beijing 100081, Peoples R China Li, HF (corresponding author), Beijing Inst Technol, Sch Automat, 5 Zhongguancun South St, Beijing 100081, Peoples R China. huifang@bit.edu.cn National Key Research and Development Program of China [2018YFB1003700]; National Natural Science Foundation of ChinaNational Natural Science Foundation of China (NSFC) [61836001] This work is supported in part by the National Key Research and Development Program of China (Grant No.2018YFB1003700), and in part by the National Natural Science Foundation of China (Grant No.61836001).
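A minimal scikit-learn sketch of the first step of the KM-GAN approach described in the record above: partition the minority class into k clusters, each of which would then train its own GAN to synthesize a proportional share of the supplement. The feature shapes, k = 4, and the total quota are illustrative assumptions.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_minority = rng.normal(size=(120, 16))          # stand-in fault features
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_minority)

target_total = 1000                              # illustrative rebalancing quota
for c in range(k):
    cluster = X_minority[labels == c]
    quota = round(target_total * len(cluster) / len(X_minority))
    # A per-cluster GAN would be trained on `cluster` here and asked
    # for `quota` synthetic minority-class samples.
    print(f"cluster {c}: {len(cluster)} real samples -> generate {quota}")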
Arjovsky M., 2017, ARXIV170107875, P214; Bae H., 2004, J ADV COMPUT INTELL, V8, P431; Barandela R, 2004, LECT NOTES COMPUT SC, V3138, P806; Batista GEAPA, 2004, ACM SIGKDD EXPLOR NE, V6, P20, DOI [10.1145/1007730.1007735], DOI 10.1145/1007730.1007735, 10.1145/1007730.1007735]; Chawla NV, 2002, J ARTIF INTELL RES, V16, P321, DOI 10.1613/jair.953; Anh DN, 2020, J ADV COMPUT INTELL, V24, P648; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Han H, 2020, PLOS ONE, V15, DOI 10.1371/journal.pone.0239070; Han S, 2019, INT J PRECIS ENG MAN, V20, P167, DOI 10.1007/s12541-019-00082-4; Han T, 2019, KNOWL-BASED SYST, V165, P474, DOI 10.1016/j.knosys.2018.12.019; Hartigan J. A., 1979, Applied Statistics, V28, P100, DOI 10.2307/2346830; He YY, 2019, COMPUT BIOL CHEM, V80, P121, DOI 10.1016/j.compbiolchem.2019.03.017; Hinton GE, 2006, NEURAL COMPUT, V18, P1527, DOI 10.1162/neco.2006.18.7.1527; Nguyen HB, 2020, J ADV COMPUT INTELL, V24, P48; Jia F, 2016, MECH SYST SIGNAL PR, V72-73, P303, DOI 10.1016/j.ymssp.2015.10.025; Lei YG, 2020, MECH SYST SIGNAL PR, V138, DOI 10.1016/j.ymssp.2019.106587; Li H., 2021, IEEE T AUTOMATION SC, DOI [10.1109/TASE.2020.3048056, DOI 10.1109/TASE.2020.3048056]; Liu H, 2018, NEUROCOMPUTING, V315, P412, DOI 10.1016/j.neucom.2018.07.034; ROSENBLATT F, 1958, PSYCHOL REV, V65, P386, DOI 10.1037/h0042519; RUMELHART DE, 1986, NATURE, V323, P533, DOI 10.1038/323533a0; Sun C, 2020, PROCEDIA MANUFACTURI, V49, P99, DOI [10.1016/j.promfg.2020.07.003, DOI 10.1016/J.PROMFG.2020.07.003]; Tang SN, 2020, IEEE ACCESS, V8, P9335, DOI 10.1109/ACCESS.2019.2963092; Wang ZR, 2018, NEUROCOMPUTING, V310, P213, DOI 10.1016/j.neucom.2018.05.024; Wei YQ, 2020, CAN J CHEM ENG, V98, P1293, DOI 10.1002/cjce.23750; Zhang J., 2003, INT C MACH LEARN WOR; Zhu TF, 2017, PATTERN RECOGN, V72, P327, DOI 10.1016/j.patcog.2017.07.024 26 0 0 1 1 FUJI TECHNOLOGY PRESS LTD TOKYO 1-15-7, UCHIKANDA, CHIYODA-KU, UNIZO UCHIKANDA 1-CHOME BLDG 2F, TOKYO, 101-0047, JAPAN 1343-0130 1883-8014 J ADV COMPUT INTELL J. Adv. Comput. Intell. Inform. MAY 2021 25 3 346 355 10.20965/jaciii.2021.p0346 10 Computer Science, Artificial Intelligence Computer Science SH7GD WOS:000654301300009 2021-09-15 C Rezaei, M; Uemura, T; Nappi, J; Yoshida, H; Lippert, C; Meinel, C Hahn, HK; Mazurowski, MA Rezaei, Mina; Uemura, Tomoki; Nappi, Janne; Yoshida, Hiroyuki; Lippert, Christoph; Meinel, Christoph Generative Synthetic Adversarial Network for Internal Bias Correction and Handling Class Imbalance Problem in Medical Image Diagnosis MEDICAL IMAGING 2020: COMPUTER-AIDED DIAGNOSIS Proceedings of SPIE English Proceedings Paper Conference on Medical Imaging - Computer-Aided Diagnosis FEB 16-19, 2020 Houston, TX SPIE Imbalanced Learning; Synthetic Medical Imaging; GANs; Multi-Class Classification Imbalanced training data introduce an important challenge into medical image analysis, where a majority of the data belongs to a normal class and only a few samples belong to abnormal classes. We propose to mitigate the class imbalance problem by introducing two generative adversarial network (GAN) architectures for class minority oversampling. Here, we explore balancing the data distribution 1) by generating new samples with an unsupervised GAN or 2) by synthesizing missing image modalities with a semi-supervised GAN.
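A small numpy sketch of the rebalancing bookkeeping implied by the record above: count how far each abnormal class falls short of the normal class, which is the number of samples a GAN-based oversampler would be asked to synthesize. The class counts are invented for illustration.

import numpy as np

y = np.array([0] * 1200 + [1] * 90 + [2] * 45)   # invented class labels
classes, counts = np.unique(y, return_counts=True)
deficits = counts.max() - counts                  # shortfall per class
for c, d in zip(classes, deficits):
    print(f"class {c}: synthesize {d} samples to reach balance")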
We evaluated the effect of the synthetic unsupervised and semi-supervised GAN methods by use of 1,500 MR images for brain disease diagnosis, where the classification performance of a residual network was compared between unbalanced datasets, classic data augmentation, and the proposed new GAN-based methods. The evaluation results showed that the synthesized minority samples generated by GAN improved classification accuracy by up to 18% in terms of Dice score. [Rezaei, Mina; Lippert, Christoph; Meinel, Christoph] Hasso Plattner Inst, Prof Dr Helmert St 2-3, Berlin, Germany; [Rezaei, Mina; Uemura, Tomoki; Nappi, Janne; Yoshida, Hiroyuki] Massachusetts Gen Hosp, Dept Radiol, 3D Imaging Res, 25 New Chardon St, Boston, MA 02114 USA; [Rezaei, Mina; Uemura, Tomoki; Nappi, Janne; Yoshida, Hiroyuki] Harvard Med Sch, 25 New Chardon St, Boston, MA 02115 USA Rezaei, M (corresponding author), Hasso Plattner Inst, Prof Dr Helmert St 2-3, Berlin, Germany.; Rezaei, M (corresponding author), Massachusetts Gen Hosp, Dept Radiol, 3D Imaging Res, 25 New Chardon St, Boston, MA 02114 USA.; Rezaei, M (corresponding author), Harvard Med Sch, 25 New Chardon St, Boston, MA 02115 USA. fmina.rezaei@hpi.de; tuemura@mgh.harvard.edu; janne.nappi@mgh.harvard.edu; yoshida.hirog@mgh.harvard.edu; christoph.lippert@hpi.de; christoph.meinelg@hpi.de Douzas G, 2018, EXPERT SYST APPL, V91, P464, DOI 10.1016/j.eswa.2017.09.030; Efros A.A., 2016, ABS161107004 CORR; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; HAN B, 2018, ADV NEUR IN, V31; Heusel M., 2017, ADV NEURAL INFORM PR, P6629; Jang JW, 2014, PROCEEDINGS OF THE 2014 9TH INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS (VISAPP), VOL 1, P15; Paul JS, 2017, PROC SPIE, V10137, DOI 10.1117/12.2254195; Rezaei M., 2018, 2018 INT JOINT C NEU, P1, DOI DOI 10.1109/IJCNN.2018.8489105; Rezaei M, 2019, PROC SPIE, V10950, DOI 10.1117/12.2512215; Rezaei M, 2019, IEEE WINT CONF APPL, P1836, DOI 10.1109/WACV.2019.00200; Rezaei M, 2017, LECT NOTES COMPUT SC, V10637, P798, DOI 10.1007/978-3-319-70093-9_85; Rezaiee-Pajand M, 2020, MECH ADV MATER STRUC, V27, P975, DOI 10.1080/15376494.2018.1503381; Simonyan K., 2014, 2 INT C LEARN REPR I 13 0 0 1 3 SPIE-INT SOC OPTICAL ENGINEERING BELLINGHAM 1000 20TH ST, PO BOX 10, BELLINGHAM, WA 98227-0010 USA 0277-786X 1996-756X 978-1-5106-3396-4 PROC SPIE 2020 11314 113140E 10.1117/12.2551166 8 Engineering, Biomedical; Optics; Radiology, Nuclear Medicine & Medical Imaging Engineering; Optics; Radiology, Nuclear Medicine & Medical Imaging BQ2WK WOS:000582673400012 2021-09-15 J Armanious, K; Kustner, T; Reimold, M; Nikolaou, K; La Fougere, C; Yang, B; Gatidis, S Armanious, Karim; Kuestner, Thomas; Reimold, Matthias; Nikolaou, Konstantin; La Fougere, Christian; Yang, Bin; Gatidis, Sergios Independent brain F-18-FDG PET attenuation correction using a deep learning approach with Generative Adversarial Networks HELLENIC JOURNAL OF NUCLEAR MEDICINE English Article Attenuation correction; Deep learning; Generative Adversarial Networks CT; QUANTIFICATION; DISEASE; SPET; SPM; MRI Objective: Attenuation correction (AC) of positron emission tomography (PET) data poses a challenge when no transmission data or computed tomography (CT) data are available, e.g. in stand-alone PET scanners or PET/magnetic resonance imaging (MRI). In these cases, external imaging data or morphological imaging data are normally used for the generation of attenuation maps.
Newly introduced machine learning methods, however, may allow for direct estimation of attenuation maps from non-attenuation-corrected PET data (PETNAC). Our purpose was thus to establish and evaluate a method for independent AC of brain fluorine-18-fluorodeoxyglucose (F-18-FDG) PET images based only on PETNAC using Generative Adversarial Networks (GAN). Subjects and Methods: After training of the deep learning GAN framework on a paired training dataset of PETNAC and the corresponding CT images of the head from 50 patients, pseudo-CT images were generated from PETNAC of 40 validation patients, of which 20 were used for technical validation and 20 stemming from patients with CNS disorders were used for clinical validation. Pseudo-CT was used for subsequent AC of these validation data sets, resulting in independently attenuation-corrected PET data. Results: Visual inspection revealed a high degree of resemblance of generated pseudo-CT images compared to the acquired CT images in all validation data sets, with minor differences in individual anatomical details. Quantitative analyses revealed minimal underestimation below 5% of standardized uptake value (SUV) in all brain regions in independently attenuation-corrected PET data compared to the reference PET images. Color-coded error maps showed no regional bias and only minimal average errors around 0%. Using independently attenuation-corrected PET data, no differences in image-based diagnoses were observed in 20 patients with neurological disorders compared to the reference PET images. Conclusion: Independent AC of brain F-18-FDG PET is feasible with high accuracy using the proposed, easy to implement deep learning framework. Further evaluation in clinical cohorts will be necessary to assess the clinical performance of this method. [Armanious, Karim; Kuestner, Thomas; Nikolaou, Konstantin; Gatidis, Sergios] Univ Hosp Tubingen, Dept Radiol Diagnost & Intervent Radiol, Tubingen, Germany; [Armanious, Karim; Kuestner, Thomas; Yang, Bin] Univ Stuttgart, Inst Signal Proc & Syst Theory, Stuttgart, Germany; [Reimold, Matthias; La Fougere, Christian] Univ Hosp Tubingen, Dept Radiol Nucl Med, Tubingen, Germany; [Kuestner, Thomas] St Thomas Hosp, Kings Coll London, Sch Biomed Engn & Imaging Sci, London, England Gatidis, S (corresponding author), Univ Hosp Tubingen, Hoppe Seyler Str 3, D-72076 Tubingen, Germany.
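A minimal PyTorch sketch of a paired image-to-image objective of the kind the record above applies to PETNAC-to-pseudo-CT translation: an adversarial term on (input, output) pairs plus an L1 reconstruction term. The toy networks, image sizes, and lambda weight are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))        # PETNAC -> pseudo-CT
D = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))        # judges (input, output) pairs

bce, l1, lambda_l1 = nn.BCEWithLogitsLoss(), nn.L1Loss(), 100.0
pet_nac = torch.randn(4, 1, 64, 64)                     # stand-in minibatch
ct_true = torch.randn(4, 1, 64, 64)

ct_fake = G(pet_nac)
pair_fake = torch.cat([pet_nac, ct_fake], dim=1)        # condition D on the input
d_fake = D(pair_fake)
# Generator: fool the discriminator AND stay close to the acquired CT.
g_loss = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(ct_fake, ct_true)

pair_real = torch.cat([pet_nac, ct_true], dim=1)
d_loss = bce(D(pair_real), torch.ones_like(d_fake)) + \
         bce(D(pair_fake.detach()), torch.zeros_like(d_fake))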
sergios.gatidis@med.uni-tuebingen.de la Fougere, Christian/AAN-2811-2021; Kuestner, Thomas/ABE-7866-2020; Gatidis, Sergios/AAF-4858-2020 la Fougere, Christian/0000-0001-7519-0417; Kuestner, Thomas/0000-0002-0353-4898; Gatidis, Sergios/0000-0002-6928-4967 Apostolopoulos DJ, 2016, HELL J NUCL MED, V19, P89, DOI 10.1967/s0024499100360; Armanious K, 2018, ARXIV E PRINTS; Ashburner J, 2012, NEUROIMAGE, V62, P791, DOI 10.1016/j.neuroimage.2011.10.025; Bailey DL, 1998, EUR J NUCL MED, V25, P774, DOI 10.1007/s002590050282; Berker Y, 2016, MED PHYS, V43, P807, DOI 10.1118/1.4938264; Bezrukov I, 2013, SEMIN NUCL MED, V43, P45, DOI 10.1053/j.semnuclmed.2012.08.002; Buchbender C, 2013, BRIT J RADIOL, V86, DOI 10.1259/bjr.20120570; Burgos N, 2014, IEEE T MED IMAGING, V33, P2332, DOI 10.1109/TMI.2014.2340135; Chartsias Agisilaos, 2017, ADVERSARIAL IMAGE SY; Chen YS, 2017, MAGN RESON IMAGING C, V25, P245, DOI 10.1016/j.mric.2016.12.001; Choi H, 2018, J NUCL MED, V59, P1111, DOI 10.2967/jnumed.117.199414; Dewan M, 2013, 2013 INT WORKSH PATT; Eldib M, 2016, PET CLIN, V11, P151, DOI 10.1016/j.cpet.2015.10.004; Evans AC, 2012, NEUROIMAGE, V62, P911, DOI 10.1016/j.neuroimage.2012.01.024; Gong K, 2018, PHYS MED BIOL, V63, DOI 10.1088/1361-6560/aac763; Goodfellow I., 2014, GENERATIVE ADVERSARI; Isola P, 2016, ARXIV E PRINTS; Kalantari F, 2011, HELL J NUCL MED, V14, P278; Kinahan PE, 1998, MED PHYS, V25, P2046, DOI 10.1118/1.598392; Kong E, 2015, HELL J NUCL MED, V18, P42; Kustner T, 2019, MAGN RESON MED, V82, P1527, DOI 10.1002/mrm.27783; Liu F, 2018, EJNMMI PHYS, V5, DOI 10.1186/s40658-018-0225-8; Liu F, 2018, RADIOLOGY, V286, P676, DOI 10.1148/radiol.2017170700; Merida I, 2017, PHYS MED BIOL, V62, P2834, DOI 10.1088/1361-6560/aa5f6c; MINOSHIMA S, 1995, J NUCL MED, V36, P1238; Oehmigen M, 2016, MED PHYS, V43, P4808, DOI 10.1118/1.4959546; Rausch I, 2017, J NUCL MED, V58, P1519, DOI 10.2967/jnumed.116.186148; Shawgi M, 2012, HELL J NUCL MED, V15, P215; Sjolund J, 2015, PHYS MED BIOL, V60, P825, DOI 10.1088/0031-9155/60/2/825; Tzourio-Mazoyer N, 2002, NEUROIMAGE, V15, P273, DOI 10.1006/nimg.2001.0978; Vandenberghe S, 2015, PHYS MED BIOL, V60, pR115, DOI 10.1088/0031-9155/60/4/R115; Wang Y, 2018, NEUROIMAGE, V174, P550, DOI 10.1016/j.neuroimage.2018.03.045; Watanabe M, 2017, PHYS MED BIOL, V62, P7148, DOI 10.1088/1361-6560/aa82e8; Wolterink JM, 2017, ARXIV E PRINTS; Xin WC, 2018, HELL J NUCL MED, V21, P48, DOI 10.1967/s002449910706 35 13 13 2 7 HELLENIC SOC NUCLEAR MEDICINE THESSALONIKI 51 HERMU ST, THESSALONIKI, 546 23, GREECE 1790-5427 HELL J NUCL MED Hell. J. Nucl. Med. 
SEP-DEC 2019 22 3 179 186 8 Radiology, Nuclear Medicine & Medical Imaging Radiology, Nuclear Medicine & Medical Imaging KA9VB WOS:000506147400004 31587027 2021-09-15 J Hayatbini, N; Kong, B; Hsu, KL; Nguyen, P; Sorooshian, S; Stephens, G; Fowlkes, C; Nemani, R; Ganguly, S Hayatbini, Negin; Kong, Bailey; Hsu, Kuo-lin; Phu Nguyen; Sorooshian, Soroosh; Stephens, Graeme; Fowlkes, Charless; Nemani, Ramakrishna; Ganguly, Sangram Conditional Generative Adversarial Networks (cGANs) for Near Real-Time Precipitation Estimation from Multispectral GOES-16 Satellite Imageries-PERSIANN-cGAN REMOTE SENSING English Article precipitation; multispectral satellite imagery; machine learning; convolutional neural networks (CNNs); generative adversarial networks (GANs) PASSIVE MICROWAVE; DEEP; INFORMATION; ALGORITHM In this paper, we present a state-of-the-art precipitation estimation framework which leverages advances in satellite remote sensing as well as Deep Learning (DL). The framework takes advantage of the improvements in spatial, spectral and temporal resolutions of the Advanced Baseline Imager (ABI) onboard the GOES-16 platform along with elevation information to improve the precipitation estimates. The procedure begins by first deriving a Rain/No Rain (R/NR) binary mask through classification of the pixels and then applying regression to estimate the amount of rainfall for rainy pixels. A Fully Convolutional Network is used as a regressor to predict precipitation estimates. The network is trained using the non-saturating conditional Generative Adversarial Network (cGAN) and Mean Squared Error (MSE) loss terms to generate results that better learn the complex distribution of precipitation in the observed data. Common verification metrics such as Probability Of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), Bias, Correlation and MSE are used to evaluate the accuracy of both R/NR classification and real-valued precipitation estimates. Statistics and visualizations of the evaluation measures show improvements in the precipitation retrieval accuracy in the proposed framework compared to the baseline models trained using conventional MSE loss terms. This framework is proposed as an augmentation for PERSIANN-CCS (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network- Cloud Classification System) algorithm for estimating global precipitation. [Hayatbini, Negin; Hsu, Kuo-lin; Phu Nguyen; Sorooshian, Soroosh] Univ Calif Irvine, Dept Civil & Environm Engn, Henry Samueli Sch Engn, Ctr Hydrometeorol & Remote Sensing CHRS, Irvine, CA 92697 USA; [Kong, Bailey; Fowlkes, Charless] Univ Calif Irvine, Dept Comp Sci, Irvine, CA 92697 USA; [Sorooshian, Soroosh] Univ Calif Irvine, Dept Earth Syst Sci, Irvine, CA 92697 USA; [Stephens, Graeme] CALTECH, Jet Prop Lab, Ctr Climate Sci, 4800 Oak Grove Dr, Pasadena, CA 91109 USA; [Nemani, Ramakrishna] NASA, Adv Supercomp Div, Ames Res Ctr Moffet Field, Mountain View, CA 94035 USA; [Ganguly, Sangram] NASA, Bay Area Environm Res Inst, Ames Res Ctr, Moffett Field, CA 94035 USA Hayatbini, N (corresponding author), Univ Calif Irvine, Dept Civil & Environm Engn, Henry Samueli Sch Engn, Ctr Hydrometeorol & Remote Sensing CHRS, Irvine, CA 92697 USA. 
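A small numpy sketch of the categorical verification scores named in the record above for the Rain/No Rain mask. With hits H, misses M, and false alarms F, POD = H/(H+M), FAR = F/(H+F), and CSI = H/(H+M+F); the masks below are random toys.

import numpy as np

rng = np.random.default_rng(0)
obs = rng.random((128, 128)) < 0.2        # toy observed rain mask
est = rng.random((128, 128)) < 0.2        # toy estimated rain mask

H = np.sum(est & obs)                     # hits
M = np.sum(~est & obs)                    # misses
F = np.sum(est & ~obs)                    # false alarms
print(f"POD={H/(H+M):.3f}  FAR={F/(H+F):.3f}  CSI={H/(H+M+F):.3f}")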
nhayatbi@uci.edu; bhkong@ics.uci.edu; kuolinh@uci.edu; ndphu@uci.edu; soroosh@uci.edu; graeme.stephens@jpl.nasa.gov; charless.fowlkes@gmail.com; rama.nemani@nasa.gov; sangram.ganguly@nasa.gov Hayatbini, Negin/AAC-1107-2019; Nguyen, Phu/AAT-7451-2020; sorooshian, soroosh/B-3753-2008; Hsu, Kuolin/E-6120-2019 Hayatbini, Negin/0000-0003-3213-3951; sorooshian, soroosh/0000-0001-7774-5113; HSU, KUOLIN/0000-0002-3578-3565; Nguyen, Phu/0000-0002-9055-2583 U.S. Department of Energy (DOE)United States Department of Energy (DOE) [DE-IA0000018]; California Energy Commission (CEC Award) [300-15-005]; NASA MIRO grant [NNX15AQ06A]; NASA-Jet Propulsion Laboratory (JPL) Grant [1619578]; MASEEH fellowship The financial support for this research is from U.S. Department of Energy (DOE Prime Award No. DE-IA0000018), California Energy Commission (CEC Award No. 300-15-005), MASEEH fellowship, NASA MIRO grant (NNX15AQ06A), and NASA-Jet Propulsion Laboratory (JPL) Grant (Award No. 1619578). Arjovsky M., 2017, ARXIV170107875, P214; Asanjan AA, 2018, J GEOPHYS RES-ATMOS, V123, P12543, DOI 10.1029/2018JD028375; Ba MB, 2001, J APPL METEOROL, V40, P1500, DOI 10.1175/1520-0450(2001)040<1500:GMRAG>2.0.CO;2; Behrangi A, 2010, J HYDROMETEOROL, V11, P1305, DOI 10.1175/2010JHM1248.1; Behrangi A, 2009, J HYDROMETEOROL, V10, P1414, DOI 10.1175/2009JHM1139.1; Behrangi A, 2009, J HYDROMETEOROL, V10, P684, DOI 10.1175/2009JHM1077.1; Bengio Y, 2009, FOUND TRENDS MACH LE, V2, P1, DOI 10.1561/2200000006; Danielson J.J., 2011, US GEOLOGICAL SURVEY, DOI DOI 10.3133/OFR20111073; ELMAN JL, 1990, COGNITIVE SCI, V14, P179, DOI 10.1207/s15516709cog1402_1; Goodfellow I, 2016, ADAPT COMPUT MACH LE, P1; Goodfellow I, 2016, ARXIV170100160; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Hayatbini N, 2019, J HYDROMETEOROL, V20, P901, DOI 10.1175/JHM-D-18-0197.1; Hinton, 2009, SCHOLARPEDIA, V4, P5947, DOI DOI 10.4249/scholarpedia.5947; Hong Y, 2004, J APPL METEOROL, V43, P1834, DOI 10.1175/JAM2173.1; Huffman G.J., 2015, ALGORITHM THEOR BASI, V4, P26; Huffman GJ, 2007, J HYDROMETEOROL, V8, P38, DOI 10.1175/JHM560.1; Huszar F., 2015, ARXIV151105101; Isola P, 2017, ARXIV161107004V, DOI DOI 10.1109/CVPR.2017.632; Jordan M.
I., 1997, ADV PSYCHOL, V121, P471, DOI [10.1016/s0166-4115(97)80111-2, DOI 10.1016/S0166-4115(97)80111-2]; Joyce RJ, 2004, J HYDROMETEOROL, V5, P487, DOI 10.1175/1525-7541(2004)005<0487:CAMTPG>2.0.CO;2; Kidd C, 2003, J HYDROMETEOROL, V4, P1088, DOI 10.1175/1525-7541(2003)004<1088:SREUCP>2.0.CO;2; Krizhevsky A., 2012, ADV NEURAL INFORM PR, V25, P1097; LeCun Y, 1989, NEURAL COMPUT, V1, P541, DOI 10.1162/neco.1989.1.4.541; LeCun Y, 2015, NATURE, V521, P436, DOI 10.1038/nature14539; Liu Y., 2016, INT C ADV BIG DAT AN; Liu ZY, 2015, J GEOPHYS RES-ATMOS, V120, P10116, DOI 10.1002/2015JD023787; Martin DW, 2008, J APPL METEOROL CLIM, V47, P525, DOI 10.1175/2007JAMC1525.1; Osindero S., 2014, CONDITIONAL GENERATI; Pan BX, 2019, WATER RESOUR RES, V55, P2301, DOI [10.1029/2018wr024090, 10.1029/2018WR024090]; Nguyen P, 2019, SCI DATA, V6, DOI 10.1038/sdata.2018.296; Pu Y., 2016, ADV NEURAL INFORM PR; Rasp S, 2018, P NATL ACAD SCI USA, V115, P9684, DOI 10.1073/pnas.1810286115; Reichstein M, 2019, NATURE, V566, P195, DOI 10.1038/s41586-019-0912-1; Ronneberger O, 2015, LECT NOTES COMPUT SC, V9351, P234, DOI 10.1007/978-3-319-24574-4_28; Schmidhuber J, 2015, NEURAL NETWORKS, V61, P85, DOI 10.1016/j.neunet.2014.09.003; Schmit T.J., 2010, P 6 ANN S FUT NAT OP; Schmit TJ, 2005, B AM METEOROL SOC, V86, P1079, DOI 10.1175/BAMS-86-8-1079; Shen DG, 2017, ANNU REV BIOMED ENG, V19, P221, DOI [10.1146/annurev-bioeng-071516044442, 10.1146/annurev-bioeng-071516-044442]; Shi X., 2015, ADV NEURAL INFORM PR, P802, DOI DOI 10.1007/978-3-319-21233-3_6; Sorooshian S, 2011, B AM METEOROL SOC, V92, P1353, DOI 10.1175/2011BAMS3158.1; Tao YM, 2018, J HYDROMETEOROL, V19, P393, DOI [10.1175/JHM-D-17-0077.1, 10.1175/jhm-d-17-0077.1]; Tao YM, 2017, J HYDROMETEOROL, V18, P1271, DOI [10.1175/JHM-D-16-0176.1, 10.1175/jhm-d-16-0176.1]; Tao YM, 2016, IEEE C EVOL COMPUTAT, P1349, DOI 10.1109/CEC.2016.7743945; Vandal T, 2019, THEOR APPL CLIMATOL, V137, P557, DOI 10.1007/s00704-018-2613-3; Vincent P, 2008, P 25 INT C MACH LEAR, P1096, DOI [10.1145/1390156.1390294, DOI 10.1145/1390156.1390294] 46 8 8 6 12 MDPI BASEL ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND 2072-4292 REMOTE SENS-BASEL Remote Sens. OCT 2019 11 19 2193 10.3390/rs11192193 17 Environmental Sciences; Geosciences, Multidisciplinary; Remote Sensing; Imaging Science & Photographic Technology Environmental Sciences & Ecology; Geology; Remote Sensing; Imaging Science & Photographic Technology JN3VH WOS:000496827100007 gold 2021-09-15 J Dominguez-Rodrigo, M; Fernandez-Jauregui, A; Cifuentes-Alcobendas, G; Baquedano, E Dominguez-Rodrigo, Manuel; Fernandez-Jauregui, Ander; Cifuentes-Alcobendas, Gabriel; Baquedano, Enrique Use of Generative Adversarial Networks (GAN) for Taphonomic Image Augmentation and Model Protocol for the Deep Learning Analysis of Bone Surface Modifications APPLIED SCIENCES-BASEL English Article generative adversarial networks; optimizer; activation function; neural networks; computer vision; taphonomy HIGH-ACCURACY; CUT MARKS; CLASSIFICATION Deep learning models are based on a combination of neural network architectures, optimization parameters and activation functions. All of them provide exponential combinations whose computational fitness is difficult to pinpoint. The intricate resemblance of the microscopic features that are found in bone surface modifications makes their differentiation challenging, and determining a baseline combination of optimizers and activation functions for modeling seems necessary for computational economy.
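A minimal PyTorch sketch of the kind of grid the record goes on to describe: crossing activation functions (relu, swish, mish) with optimizers (SGD and Adam) on the same classifier. The toy model, feature sizes, and learning rates are illustrative assumptions; nn.SiLU (swish) and nn.Mish require a recent PyTorch release.

import torch
import torch.nn as nn
from itertools import product

activations = {"relu": nn.ReLU, "swish": nn.SiLU, "mish": nn.Mish}
optimizers = {"sgd": lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
              "adam": lambda p: torch.optim.Adam(p, lr=1e-3)}

x, y = torch.randn(64, 32), torch.randint(0, 3, (64,))  # toy mark features/labels
loss_fn = nn.CrossEntropyLoss()

for (a_name, act), (o_name, make_opt) in product(activations.items(),
                                                 optimizers.items()):
    model = nn.Sequential(nn.Linear(32, 64), act(), nn.Linear(64, 3))
    opt = make_opt(model.parameters())
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()                                          # one step per combination
    print(f"{a_name:>5} + {o_name:<4} first-step loss: {loss.item():.3f}")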
Here, we experiment with combinations of the most resolutive activation functions (relu, swish, and mish) and the most efficient optimizers (stochastic gradient descent (SGD) and Adam) for bone surface modification analysis. We show that, despite a wide variability of outcomes, a baseline of relu-SGD is advised for raw bone surface modification data. For imbalanced samples, augmented datasets generated through generative adversarial networks are implemented, resulting in balanced accuracy but also an inherent bias regarding mark replication. In summary, although baseline procedures are advised, they do not circumvent Wolpert's "no free lunch" theorem, and our results extend it beyond model architectures. [Dominguez-Rodrigo, Manuel; Fernandez-Jauregui, Ander; Cifuentes-Alcobendas, Gabriel; Baquedano, Enrique] Alcala Univ, Inst Evolut Africa IDEA, Covarrubias 36, Madrid 28010, Spain; [Dominguez-Rodrigo, Manuel; Cifuentes-Alcobendas, Gabriel] Univ Alcala De Henares, Dept Hist & Philosophy, Area Prehist, Alcala De Henares 28801, Spain; [Baquedano, Enrique] Reg Archaeol Museum Madrid, Plaza Bernardas S-N, Alcala De Henares 28001, Spain Dominguez-Rodrigo, M (corresponding author), Alcala Univ, Inst Evolut Africa IDEA, Covarrubias 36, Madrid 28010, Spain.; Dominguez-Rodrigo, M (corresponding author), Univ Alcala De Henares, Dept Hist & Philosophy, Area Prehist, Alcala De Henares 28801, Spain. manuel.dominguezr@uah.es; anderfernandezj@gmail.com; gabrcifu@ucm.es; enrique.baquedano@madrid.org Spanish Ministry of Education, Science and Universities [HAR2017-82463-C4-1-P] We thank the Spanish Ministry of Education, Science and Universities for funding this research (HAR2017-82463-C4-1-P). We also appreciate the constructive comments made by three reviewers. We would like to express our thanks to M. A. Mate-Gonzalez for having invited us to participate in this Special Issue. Abellan N, 2021, ARCHAEOL ANTHROP SCI, V13, DOI 10.1007/s12520-021-01273-9; Anderson A, 2018, PLOS ONE, V13, DOI 10.1371/journal.pone.0204368; Antoniou A., 2017, ARXIV PREPRINT ARXIV; Bourgeon L, 2017, PLOS ONE, V12, DOI 10.1371/journal.pone.0169486; Brownlee J., 2017, MACHINE LEARNING MAS; Brownlee J., 2018, BETTER DEEP LEARNING; Chang Q., 2020, P IEEE CVF C COMP VI, P13856; Cifuentes-Alcobendas G, 2019, SCI REP-UK, V9, DOI 10.1038/s41598-019-55439-6; Dominguez-Rodrigo M, 2009, J ARCHAEOL SCI, V36, P2643, DOI 10.1016/j.jas.2009.07.017; Dominguez-Rodrigo M, 2020, SCI REP-UK, V10, DOI 10.1038/s41598-020-75994-7; Gommery D, 2011, CR PALEVOL, V10, P271, DOI 10.1016/j.crpv.2011.01.006; Goodfellow I, 2016, ADAPT COMPUT MACH LE, P1; Goodfellow IJ, 2014, ADV NEUR IN, V27, P2672; Gurevych I., 2019, ARXIV190102671; Hansford J, 2018, SCI ADV, V4, DOI 10.1126/sciadv.aat6925; Jimenez-Garcia B, 2020, J R SOC INTERFACE, V17, DOI 10.1098/rsif.2020.0782; Jimenez-Garcia B, 2020, J R SOC INTERFACE, V17, DOI 10.1098/rsif.2020.0446; Jinsakul N, 2019, MATHEMATICS-BASEL, V7, DOI 10.3390/math7121170; Kingma D.
P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Lan L, 2020, FRONT PUBLIC HEALTH, V8, DOI 10.3389/fpubh.2020.00164; Langr J., 2019, GANS ACTION DEEP LEA; Ledig C, 2017, PROC CVPR IEEE, DOI DOI 10.1109/CVPR.2017.19; Mikolajczyk Agnieszka, 2018, P117, DOI 10.1109/IIPHDW.2018.8388338.; Misra D., 2019, ARXIV190808681; Nagarajan R., 2013, BAYESIAN NETWORKS R, P122; Espigares MP, 2019, SCI REP-UK, V9, DOI 10.1038/s41598-019-51957-5; Pizarro-Monzo M, 2020, ARCHAEOL ANTHROP SCI, V12, DOI 10.1007/s12520-019-00966-6; Scutari M., 2014, BAYESIAN NETWORKS EX; Shorten C, 2019, J BIG DATA-GER, V6, DOI 10.1186/s40537-019-0197-0; Sun Y., 2020, 11 IEEE INT C KNOWL, P227, DOI DOI 10.1109/ICBK50248.2020.00041; Wolpert DH, 1996, NEURAL COMPUT, V8, P1391, DOI 10.1162/neco.1996.8.7.1391; Yi X, 2019, MED IMAGE ANAL, V58, DOI 10.1016/j.media.2019.101552; Zhang WJ, 2021, PLATELETS, V32, P582, DOI 10.1080/09537104.2020.1786039 35 0 0 0 0 MDPI BASEL ST ALBAN-ANLAGE 66, CH-4052 BASEL, SWITZERLAND 2076-3417 APPL SCI-BASEL Appl. Sci.-Basel JUN 2021 11 11 5237 10.3390/app11115237 13 Chemistry, Multidisciplinary; Engineering, Multidisciplinary; Materials Science, Multidisciplinary; Physics, Applied Chemistry; Engineering; Materials Science; Physics SP4BZ WOS:000659616700001 gold 2021-09-15 J Shirasaki, M; Moriwaki, K; Oogi, T; Yoshida, N; Ikeda, S; Nishimichi, T Shirasaki, Masato; Moriwaki, Kana; Oogi, Taira; Yoshida, Naoki; Ikeda, Shiro; Nishimichi, Takahiro Noise reduction for weak lensing mass mapping: an application of generative adversarial networks to Subaru Hyper Suprime-Cam first-year data MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY English Article gravitational lensing: weak; methods: data analysis; large-scale structure of Universe; cosmology: observations COSMIC SHEAR; DARK-MATTER; POWER SPECTRUM; PEAK COUNTS; SIMULATIONS; STATISTICS; CLUSTERS; COSMOLOGY; CALIBRATION; ALIGNMENTS We propose a deep-learning approach based on generative adversarial networks (GANs) to reduce noise in weak lensing mass maps under realistic conditions. We apply image-to-image translation using conditional GANs to the mass map obtained from the first-year data of the Subaru Hyper Suprime-Cam (HSC) Survey. We train the conditional GANs by using 25 000 mock HSC catalogues that directly incorporate a variety of observational effects. We study the non-Gaussian information in denoised maps using one-point probability distribution functions (PDFs) and also perform matching analysis for positive peaks and massive clusters. An ensemble learning technique with our GANs is successfully applied to reproduce the PDFs of the lensing convergence. About 60 per cent of the peaks in the denoised maps with height greater than 5 sigma have counterparts of massive clusters within a separation of 6 arcmin. We show that PDFs in the denoised maps are not compromised by details of multiplicative biases and photometric redshift distributions, nor by shape measurement errors, and that the PDFs show stronger cosmological dependence compared to the noisy counterpart. We apply our denoising method to a part of the first-year HSC data to show that the observed mass distribution is statistically consistent with the prediction from the standard Lambda CDM model.
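A small numpy sketch of the matching statistic quoted in the record above: the fraction of high peaks in the denoised map that have a massive-cluster counterpart within 6 arcmin. Positions are uniform toys on a flat-sky patch, in arcmin.

import numpy as np

rng = np.random.default_rng(0)
peaks = rng.uniform(0, 300, size=(40, 2))      # toy >5-sigma peak positions
clusters = rng.uniform(0, 300, size=(100, 2))  # toy massive-cluster positions

# Pairwise separations, shape (n_peaks, n_clusters)
sep = np.linalg.norm(peaks[:, None, :] - clusters[None, :, :], axis=-1)
matched = sep.min(axis=1) < 6.0                # nearest cluster within 6 arcmin
print(f"matched fraction: {matched.mean():.2f}")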
[Shirasaki, Masato] Natl Astron Observ Japan, Mitaka, Tokyo 1818588, Japan; [Shirasaki, Masato; Ikeda, Shiro] Inst Stat Math, Tachikawa, Tokyo 1908562, Japan; [Moriwaki, Kana; Yoshida, Naoki] Univ Tokyo, Dept Phys, Tokyo 1130033, Japan; [Oogi, Taira] Chiba Univ, Inst Management & Informat Technol, Chiba 2638522, Japan; [Yoshida, Naoki; Nishimichi, Takahiro] Univ Tokyo, Kavli Inst Phys & Math Universe WPI, Kashiwa, Chiba 2778583, Japan; [Yoshida, Naoki] Univ Tokyo, Inst Phys Intelligence, Tokyo 1130033, Japan; [Yoshida, Naoki] Univ Tokyo, Fac Sci, Res Ctr Early Universe, Tokyo 1130033, Japan; [Ikeda, Shiro] Grad Univ Adv Studies, Dept Stat Sci, 10-3 Midori Cho, Tachikawa, Tokyo 1908562, Japan; [Nishimichi, Takahiro] Kyoto Univ, Ctr Gravitat Phys, Yukawa Inst Theoret Phys, Kyoto 6068502, Japan Shirasaki, M (corresponding author), Natl Astron Observ Japan, Mitaka, Tokyo 1818588, Japan.; Shirasaki, M (corresponding author), Inst Stat Math, Tachikawa, Tokyo 1908562, Japan. masato.shirasaki@nao.ac.jp Ikeda, Shiro/0000-0002-2462-1448 MEXT KAKENHIMinistry of Education, Culture, Sports, Science and Technology, Japan (MEXT)Japan Society for the Promotion of ScienceGrants-in-Aid for Scientific Research (KAKENHI) [18H04358, 19K14767]; Japan Science and Technology Agency CREST GrantJapan Science & Technology Agency (JST)Core Research for Evolutional Science and Technology (CREST) [JPMJCR1414]; Japan Science and Technology Agency AIP Acceleration Research Grant [JP20317829]; JSPS KAKENHIMinistry of Education, Culture, Sports, Science and Technology, Japan (MEXT)Japan Society for the Promotion of ScienceGrants-in-Aid for Scientific Research (KAKENHI) [JP17K14273, JP19H00677]; Japanese Cabinet Office; Ministry of Education, Culture, Sports, Science and Technology (MEXT)Ministry of Education, Culture, Sports, Science and Technology, Japan (MEXT); Japan Society for the Promotion of Science (JSPS)Ministry of Education, Culture, Sports, Science and Technology, Japan (MEXT)Japan Society for the Promotion of Science; Japan Science and Technology Agency (JST)Japan Science & Technology Agency (JST); Malasiya Toray Science FoundationToray Industries, Inc.; NAOJ; Kavli IPMU; KEKHigh Energy Accelerator Research Organization (KEK); ASIAA; Princeton UniversityPrinceton University; National Aeronautics and Space AdministrationNational Aeronautics & Space Administration (NASA) [NNX08AR22G]; National Science FoundationNational Science Foundation (NSF) [AST-1238877] This work was in part supported by Grant-in-Aid for Scientific Research on Innovative Areas from the MEXT KAKENHI Grant Number (18H04358, 19K14767), and by Japan Science and Technology Agency CREST Grant Number JPMJCR1414 and AIP Acceleration Research Grant Number JP20317829. This work was also supported by JSPS KAKENHI Grant Numbers JP17K14273 and JP19H00677. Numerical computations presented in this paper were in part carried out on the general-purpose PC farm at Center for Computational Astrophysics, CfCA, of National Astronomical Observatory of Japan.; The HSC collaboration includes the astronomical communities of Japan and Taiwan and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University.
Funding was contributed by the FIRST programme from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Malasiya Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.; This paper makes use of software developed for the Vera C. Rubin Observatory. We thank the LSST Project for making their code available as free software at http://dm.lsst.org.; The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant Number NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant Number AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE) and the Los Alamos National Laboratory.; Based [in part] on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan. Adami C, 2018, ASTRON ASTROPHYS, V620, DOI 10.1051/0004-6361/201731606; Ade PAR, 2016, ASTRON ASTROPHYS, V594, DOI 10.1051/0004-6361/201525830; Aihara H., 2018, PUBL ASTRON SOC JPN, V70, pS4, DOI DOI 10.1093/pasj/psx066; Ba S, 2015, TECHNOMETRICS, V57, P479, DOI 10.1080/00401706.2014.957867; Bartelmann M, 2001, PHYS REP, V340, P291, DOI 10.1016/S0370-1573(00)00082-X; Becker MR, 2013, MON NOT R ASTRON SOC, V435, P115, DOI 10.1093/mnras/stt1352; Behroozi PS, 2013, ASTROPHYS J, V762, DOI 10.1088/0004-637X/762/2/109; Bernstein GM, 2002, ASTRON J, V123, P583, DOI 10.1086/338085; Brock A, 2018, ARXIV 180911096; Castro T, 2018, MON NOT R ASTRON SOC, V478, P1305, DOI 10.1093/mnras/sty1117; Chang C, 2018, MON NOT R ASTRON SOC, V475, P3165, DOI 10.1093/mnras/stx3363; Clowe D, 2004, ASTROPHYS J, V604, P596, DOI 10.1086/381970; Coulton WR, 2019, J COSMOL ASTROPART P, DOI 10.1088/1475-7516/2019/05/043; Coupon J., 2018, PUBL ASTRON SOC JPN, V70, pS7, DOI DOI 10.1093/pasj/psx047; Crocce M, 2006, MON NOT R ASTRON SOC, V373, P369, DOI 10.1111/j.1365-2966.2006.11040.x; Dietrich JP, 2010, MON NOT R ASTRON SOC, V402, P1049, DOI 10.1111/j.1365-2966.2009.15948.x; Fan ZH, 2010, ASTROPHYS J, V719, P1408, DOI 10.1088/0004-637X/719/2/1408; Furusawa H., 2018, PUBL ASTRON SOC JPN, V70, pS3, DOI [10.1093/pasj/psx079, DOI 10.1093/pasj/psx079]; Goodfellow I, 2020, COMMUN ACM, V63, P139, DOI 10.1145/3422622; Gupta A, 2018, PHYS REV D, V97, DOI 10.1103/PhysRevD.97.103515; Hamana T, 2004, MON NOT R ASTRON SOC, V350, P893, DOI 10.1111/j.1365-2966.2004.07691.x; Hamana T, 2001, MON NOT R ASTRON SOC, V327, P169, DOI 10.1046/j.1365-8711.2001.04685.x; Hikage C, 2019, PUBL ASTRON SOC JPN, V71, DOI 10.1093/pasj/psz010; Hildebrandt H, 2017, MON NOT R ASTRON SOC, V465, P1454, DOI 10.1093/mnras/stw2805; Hinshaw G, 2013, ASTROPHYS J SUPPL S, V208,
DOI 10.1088/0067-0049/208/2/19; Hirata C, 2003, MON NOT R ASTRON SOC, V343, P459, DOI 10.1046/j.1365-8711.2003.06683.x; Hirata CM, 2007, MON NOT R ASTRON SOC, V381, P1197, DOI 10.1111/j.1365-2966.2007.12312.x; Hu W, 2001, ASTROPHYS J, V554, P67, DOI 10.1086/321380; Huterer D, 2018, REP PROG PHYS, V81, DOI 10.1088/1361-6633/aa997e; Isola P, 2016, CORR; Jain B, 2000, ASTROPHYS J, V530, P547, DOI 10.1086/308384; Jarvis M, 2004, MON NOT R ASTRON SOC, V352, P338, DOI 10.1111/j.1365-2966.2004.07926.x; Jeffrey N, 2020, MON NOT R ASTRON SOC, V492, P5023, DOI 10.1093/mnras/staa127; KAISER N, 1993, ASTROPHYS J, V404, P441, DOI 10.1086/172297; Kingma D. P., 2014, ARXIV14126980, DOI DOI 10.1145/1830483.1830503; Komiyama Y., 2018, PUBL ASTRON SOC JPN, V70, pS2, DOI [10.1093/pasj/psx069, DOI 10.1093/pasj/psx069]; Kratochvil JM, 2010, PHYS REV D, V81, DOI 10.1103/PhysRevD.81.043519; Krause E, 2016, MON NOT R ASTRON SOC, V456, P207, DOI 10.1093/mnras/stv2615; Lewis A, 2000, ASTROPHYS J, V538, P473, DOI 10.1086/309179; Lin CA, 2015, ASTRON ASTROPHYS, V576, DOI 10.1051/0004-6361/201425188; Liu J, 2019, PHYS REV D, V99, DOI 10.1103/PhysRevD.99.083508; Mandelbaum R, PASJ, V70, pS25; Mandelbaum R, 2018, MON NOT R ASTRON SOC, V481, P3170, DOI 10.1093/mnras/sty2420; Marques G. A., 1906, JCAP, P019; Matilla JMZ, 2020, PHYS REV D, V102, DOI 10.1103/PhysRevD.102.123506; Matsubara T, 2001, ASTROPHYS J, V552, pL89, DOI 10.1086/320327; Miyazaki S, 2018, PUBL ASTRON SOC JPN, V70, DOI 10.1093/pasj/psx063; Miyazaki S, 2015, ASTROPHYS J, V807, DOI 10.1088/0004-637X/807/1/22; Moriwaki K., 2020, AJ, V906, P5; Murata R, 2019, PUBL ASTRON SOC JPN, V71, DOI 10.1093/pasj/psz092; Nishimichi T, 2019, ASTROPHYS J, V884, DOI 10.3847/1538-4357/ab3719; Nishimichi T, 2009, PUBL ASTRON SOC JPN, V61, P321, DOI 10.1093/pasj/61.2.321; Oguri M, 2018, PUBL ASTRON SOC JPN, V70, DOI 10.1093/pasj/psx042; Oguri M, 2014, MON NOT R ASTRON SOC, V444, P147, DOI 10.1093/mnras/stu1446; Osato K, 2015, ASTROPHYS J, V806, DOI 10.1088/0004-637X/806/2/186; Pen UL, 2003, ASTROPHYS J, V592, P664, DOI 10.1086/375734; Petri A, 2015, PHYS REV D, V91, DOI 10.1103/PhysRevD.91.103511; Press WH., 1992, NUMERICAL RECIPES FO; Ribli D, 2019, NAT ASTRON, V3, P93, DOI 10.1038/s41550-018-0596-8; Ronneberger O, 2015, MICCAI 2015 MED IMAG, P234; Sato J., 2001, ASTROPHYS J 2, V551, pL5; Sato M, 2009, ASTROPHYS J, V701, P945, DOI 10.1088/0004-637X/701/2/945; Schneider P, 1996, MON NOT R ASTRON SOC, V283, P837, DOI 10.1093/mnras/283.3.837; Schneider P, 2002, ASTRON ASTROPHYS, V396, P1, DOI 10.1051/0004-6361:20021341; SEITZ C, 1995, ASTRON ASTROPHYS, V297, P287; Sgier R., 2017, ARXIV170705167; Shirasaki M, 2019, PHYS REV D, V100, DOI 10.1103/PhysRevD.100.043527; Shirasaki M, 2019, MON NOT R ASTRON SOC, V486, P52, DOI 10.1093/mnras/stz791; Shirasaki M, 2017, MON NOT R ASTRON SOC, V470, P3476, DOI 10.1093/mnras/stx1477; Shirasaki M, 2017, MON NOT R ASTRON SOC, V466, P2402, DOI 10.1093/mnras/stw3254; Shirasaki M, 2017, MON NOT R ASTRON SOC, V465, P1974, DOI 10.1093/mnras/stw2950; Shirasaki M, 2015, MON NOT R ASTRON SOC, V453, P3043, DOI 10.1093/mnras/stv1854; Shirasaki M, 2013, ASTROPHYS J, V774, DOI 10.1088/0004-637X/774/2/111; Springel V, 2005, MON NOT R ASTRON SOC, V364, P1105, DOI 10.1111/j.1365-2966.2005.09655.x; Takada M, 2003, ASTROPHYS J, V583, pL49, DOI 10.1086/368066; Takahashi R, 2017, ASTROPHYS J, V850, DOI 10.3847/1538-4357/aa943d; Tanaka M., 2018, PUBL ASTRON SOC JPN, V70, pS9, DOI DOI 10.1093/pasj/psx077; Taruya A, 2002, ASTROPHYS J, V571, P638, DOI 
10.1086/340048; Troxel MA, 2018, PHYS REV D, V98, DOI 10.1103/PhysRevD.98.043528; Troxel MA, 2015, PHYS REP, V558, P1, DOI 10.1016/j.physrep.2014.11.001; Troxel M. A, MNRAS, V479, P4998; Tyson J. A., 1990, ASTROPHYS J, V349, pL1; Valageas P, 2011, ASTRON ASTROPHYS, V527, DOI 10.1051/0004-6361/201015685; Vikram V, 2015, PHYS REV D, V92, DOI 10.1103/PhysRevD.92.022006; Wang S, 2009, ASTROPHYS J, V691, P547, DOI 10.1088/0004-637X/691/1/547; Zaldarriaga M, 2003, ASTROPHYS J, V584, P559, DOI 10.1086/345789 86 0 0 0 0 OXFORD UNIV PRESS OXFORD GREAT CLARENDON ST, OXFORD OX2 6DP, ENGLAND 0035-8711 1365-2966 MON NOT R ASTRON SOC Mon. Not. Roy. Astron. Soc. JUN 2021 504 2 1825 1839 10.1093/mnras/stab982 15 Astronomy & Astrophysics Astronomy & Astrophysics SP1SW WOS:000659453800020 Green Submitted 2021-09-15