1. Angluin, D.: Queries and concept learning. Mach. Learn. 2(4), 319–342 (1988)
2. Ash, J., Zhang, C., Krishnamurthy, A., Langford, J., Agarwal, A.: Deep batch active learning by diverse, uncertain gradient lower bounds. In: Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia (2020)
3. Azimi, J., Fern, A., Fern, X., Borradaile, G., Heeringa, B.: Batch active learning via coordinated matching. In: Proceedings of the 29th International Conference on Machine Learning, Edinburgh, Scotland, UK (2012)
4. Beluch, W., Genewein, T., Nurnberger, A., Kohler, J.M.: The power of ensembles for active learning in image classification. In: 2018 IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, pp. 9368–9377 (2018)
5. Bilgic, M., Getoor, L.: Link-based active learning. In: NIPS Workshop on Analyzing Networks and Learning with Graphs (2009)
6. Blitzer, J., Dredze, M., Pereira, F.: Biographies, Bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, Prague, Czech Republic (2007)
7. Bloodgood, M., Vijay-Shanker, K.: Taking into account the differences between actively and passively acquired data: the case of active learning with support vector machines for imbalanced datasets. In: Human Language Technologies: Conference of the North American Chapter of the Association for Computational Linguistics, Boulder, Colorado, USA, pp. 137–140 (2009)
8. Brunton, S., Proctor, J., Kutz, J.: Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. 113(15), 3932–3937 (2016)
9. Chinesta, F., Cueto, E., Abisset-Chavanne, E., Duval, J.L., El Khaldi, F.: Virtual, digital and hybrid twins: a new paradigm in data-based engineering and engineered data. Arch. Comput. Methods Eng. 27(1), 105–134 (2020)
10. Chinesta, F., Huerta, A., Rozza, G., Willcox, K.: Encyclopedia of Computational Mechanics. Wiley, New York (2015)
11. Dagan, I., Engelson, S.P.: Committee-based sampling for training probabilistic classifiers. In: Proceedings of the Twelfth International Conference on Machine Learning, Tahoe City, California, USA, pp. 150–157 (1995)
12. Ducoffe, M., Precioso, F.: Adversarial Active Learning for Deep Networks: A Margin Based Approach. arXiv:1802.09841 (2018)
13. Freytag, A., Rodner, E., Denzler, J.: Selecting influential examples: active learning with expected model output changes. In: Computer Vision - ECCV 2014, 13th European Conference, Zurich, Switzerland, pp. 562–577 (2014)
14. Gal, Y., Islam, R., Ghahramani, Z.: Deep Bayesian active learning with image data. In: Proceedings of the 34th International Conference on Machine Learning, Sydney, NSW, Australia, vol. 70, pp. 1183–1192 (2017)
15. Geifman, Y., El-Yaniv, R.: Deep Active Learning over the Long Tail. arXiv:1711.00941 (2017)
16. Guo, Y.: Active instance sampling via matrix partition. In: Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems, Vancouver, BC, Canada, pp. 802–810 (2010)
17. Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Deep Learning of Thermodynamics-Aware Reduced-Order Models from Data. arXiv:2007.03758 (2020)
18. Hernandez, Q., Badias, A., Gonzalez, D., Chinesta, F., Cueto, E.: Structure-preserving neural networks. J. Comput. Phys. 426, 109950 (2021)
19. Huang, J., Child, R., Rao, V., Liu, H., Satheesh, S., Coates, A.: Active Learning for Speech Recognition: The Power of Gradients. arXiv:1612.03226 (2016)
20. Ibanez, R., Abisset-Chavanne, E., Ammar, A., González, D., Cueto, E., Huerta, A., Duval, J.L., Chinesta, F.: A multidimensional data-driven sparse identification technique: the sparse proper generalized decomposition. Complexity 2018, 5608286 (2018). https://doi.org/10.1155/2018/5608286
21. Ibanez, R., Abisset-Chavanne, E., Cueto, E., Ammar, A., Duval, J.-L., Chinesta, F.: Some applications of compressed sensing in computational mechanics: model order reduction, manifold learning, data-driven applications and nonlinear dimensionality reduction. Comput. Mech. 64(5), 1259–1271 (2019)
22. Joshi, A., Porikli, F., Papanikolopoulos, N.: Multi-class active learning for image classification. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2372–2379 (2009)
23. King, R.D., Whelan, K.E., Jones, F.M., Reiser, P.G.K., Bryant, C.H., Muggleton, S.H., Kell, D.B., Oliver, S.G.: Functional genomic hypothesis generation and experimentation by a robot scientist. Nature 427(6971), 247–252 (2004)
24. Krishnamurthy, V.: Algorithms for optimal scheduling and management of hidden Markov model sensors. IEEE Trans. Signal Process. 50(6), 1382–1397 (2002)
25. Laughlin, R., Pines, D.: The theory of everything. Proc. Natl. Acad. Sci. USA 97(1), 28–31 (2000)
26. Lewis, D., Gale, W.: A sequential algorithm for training text classifiers. In: Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 3–12 (1994)
27. Loyola, D., Pedergnana, M., Gimeno, S.: Smart sampling and incremental function learning for very large high dimensional data. Neural Netw. 78, 75–87 (2015)
28. McKay, M.D., Conover, W.J., Beckman, R.: A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21, 239–245 (1979)
29. Moya, B., Badias, A., Alfaro, I., Chinesta, F., Cueto, E.: Digital twins that learn and correct themselves. Int. J. Numer. Methods Eng. 123(13), 3034–3044 (2022). https://doi.org/10.1002/nme.6535
30. Nguyen, T., Smeulders, A.: Active learning using pre-clustering. In: ICML, p. 79 (2004)
31. Nocedal, J., Wright, S.: Numerical Optimization, pp. 529–562. Springer, Berlin (2000)
32. Pinillo, R., Abisset-Chavanne, E., Ammar, A., et al.: A multidimensional data-driven sparse identification technique: the sparse proper generalized decomposition. Complexity 11, 1–11 (2018)
33. Ranganathan, H., Venkateswara, H., Chakraborty, S., Panchanathan, S.: Deep active learning for image classification. In: 2017 IEEE International Conference on Image Processing, pp. 3934–3938 (2017)
34. Ren, P., Xiao, Y., Chang, X., Huang, P.-Y., Li, Z., Gupta, B.B., Chen, X., Wang, X.: A Survey of Deep Active Learning. arXiv:2009.00236 (2021)
35. Roy, N., McCallum, A.: Toward optimal active learning through Monte Carlo estimation of error reduction. In: ICML, pp. 441–448 (2001)
36. Sancarlos, A., Cameron, M., Abel, A., Cueto, E., Duval, J.-L., Chinesta, F.: From ROM of electrochemistry to AI-based battery digital and hybrid twin. Arch. Comput. Methods Eng., pp. 1–37 (2020)
37. Sancarlos, A., Champaney, V., Duval, J.L., Cueto, E., Chinesta, F.: PGD-based advanced nonlinear multiparametric regressions for constructing metamodels at the scarce-data limit. arXiv:2103.05358 (2021)
38. Sener, O., Savarese, S.: Active learning for convolutional neural networks: a core-set approach. arXiv:1708.00489 (2018)
39. Settles, B.: Active Learning Literature Survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison (2010)
40. Settles, B., Craven, M., Ray, S.: Multiple-instance active learning. Adv. Neural Inf. Process. Syst. 20, 1289–1296 (2008)
41. Seung, H., Opper, M., Sompolinsky, H.: Query by committee. In: Proceedings of the 5th Annual Workshop on Computational Learning Theory, pp. 287–294 (1992)
42. Shui, C., Zhou, F., Gagne, C., Wang, B.: Deep active learning: unified and principled method for query and training. In: International Conference on Artificial Intelligence and Statistics, PMLR, pp. 1308–1318 (2020)
43. Stein, M.: Large sample properties of simulations using Latin hypercube sampling. Technometrics 29(2), 143–151 (1987)
44. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing Properties of Neural Networks. arXiv:1312.6199 (2014)
45. Tong, S., Koller, D.: Support vector machine active learning with applications to text classification. J. Mach. Learn. Res. 2(1), 45–66 (2002)
46. Torregrosa, S., Champaney, V., Ammar, A., Herbert, V., Chinesta, F.: Surrogate parametric metamodel based on optimal transport. Math. Comput. Simul. 194, 36–63 (2021)
47. Udrescu, S., Tan, A., Feng, J., Neto, O., Wu, T., Tegmark, M.: AI Feynman 2.0: Pareto-Optimal Symbolic Regression Exploiting Graph Modularity. arXiv:2006.10782 (2020)
48. Yang, Y., Loog, M.: A Benchmark and Comparison of Active Learning for Logistic Regression. arXiv:1611.08618 (2018)
49. Yin, C., Qian, B., Cao, S., et al.: Deep similarity-based batch mode active learning with exploration-exploitation. In: IEEE International Conference on Data Mining, pp. 575–584 (2017)
50. Zhdanov, F.: Diverse Mini-Batch Active Learning. arXiv:1901.05954 (2019)