Towards a dynamic spatial microsimulation model for projecting Auckland’s spatial distribution of ethnic groups

In this paper we describe the development, calibration and validation of a dynamic spatial microsimulation model for projecting small area (area unit) ethnic populations in Auckland, New Zealand. The key elements of the microsimulation model are a module that projects spatial mobility (migration) within Auckland and between Auckland and the rest of the world, and a module that projects ethnic mobility. The model is developed and calibrated using 1996-2001 New Zealand Linked Census (i.e. longitudinal) data, and then projected forward to 2006. We then compare the results with the actual 2006 population. We find that in terms of indexes of overall residential sorting and ethnic diversity, our p..
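
As an illustration of the projection mechanics described above, the sketch below advances a synthetic individual-level population one census period at a time by sampling from a spatial-mobility and an ethnic-mobility transition matrix. All areas, groups and probabilities are invented placeholders, not the paper's calibrated values.

    import numpy as np

    rng = np.random.default_rng(42)

    areas = ["area_A", "area_B"]            # illustrative area units
    groups = ["group_1", "group_2"]         # illustrative ethnic groups

    # Illustrative transition matrices (rows sum to 1), not calibrated values.
    move_prob = np.array([[0.9, 0.1],       # P(destination area | origin area)
                          [0.2, 0.8]])
    ethnic_prob = np.array([[0.95, 0.05],   # P(group at next census | group now)
                            [0.10, 0.90]])

    # One simulated individual record: (area index, group index)
    population = [(rng.integers(2), rng.integers(2)) for _ in range(10_000)]

    def project_one_period(pop):
        new_pop = []
        for area, group in pop:
            area = rng.choice(2, p=move_prob[area])       # spatial mobility module
            group = rng.choice(2, p=ethnic_prob[group])   # ethnic mobility module
            new_pop.append((area, group))
        return new_pop

    # Two five-year steps, e.g. 1996-2001 and 2001-2006 in the paper's timeline.
    population_2006 = project_one_period(project_one_period(population))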

Computational Economics

Do patents really foster innovation in the pharmaceutical sector? Results from an evolutionary, agent-based model

The role of the patent system in the pharmaceutical sector is highly debated, not least because of its strong public health implications. In this paper we develop an evolutionary, agent-based model of the pharmaceutical industry to explore the impact of different configurations of the patent system on innovation and competition. The model is able to replicate the main stylized facts of the drug industry as emergent properties. We perform policy experiments to assess the impact of different IPR regimes, changing the breadth and length of patents. Results suggest that enlarging the extent and duration of patents yields adverse effects in terms of innovation outcomes, as well as of market competition an..

Computational Economics

AgriLOVE: agriculture, land-use and technical change in an evolutionary, agent-based model.

The paper focuses on the capital structure of firms in their early years of operation. Through the lens of Pecking Order Theory, we study how the pursuit of innovation influences the reliance of firms on different types of internal and external finance. Panel analyses of data on 7,394 German start-ups show that innovation activities are relevant predictors of the start-ups' revealed preferences for finance, and that the nature of these effects on the type and order of financing sources depends on the degree of information asymmetries specific to research and development activities, human capital endowments, and the market introduction of new products and processes.

Computational Economics

Learning about Unprecedented Events: Agent-Based Modelling and the Stock Market Impact of COVID-19

We model the learning process of market traders during the unprecedented COVID-19 event. We introduce a behavioral heterogeneous-agent model with bounded rationality that includes a correction mechanism through representativeness (Gennaioli et al., 2015). To inspect the market crash induced by the pandemic, we calibrate the model to the STOXX Europe 600 Index over the period in which stock markets suffered their greatest single-day percentage drop ever. Once the extreme event materializes, agents tend to become more sensitive to all positive and negative news, before subsequently converging towards close-to-rational behavior. We find that the deflation mechanism of less representative news seems to disappear after the extreme event.

Computational Economics

Spatial regression graph convolutional neural networks: A deep learning paradigm for spatial multivariate distributions

Geospatial artificial intelligence (GeoAI) has emerged as a subfield of GIScience that uses artificial intelligence approaches and machine learning techniques for geographic knowledge discovery. The non-regularity of data structures has recently led to different variants of graph neural networks in the field of computer science, with graph convolutional neural networks being one of the most prominent; these operate on non-Euclidean structured data, where the number of connections per node varies and the nodes are unordered. These networks use graph convolutions, commonly known as filters or kernels, in place of general matrix multiplication in at least one of their layers. This paper suggests spa..
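
As a rough illustration of the graph-convolution operation such networks build on (not the spatial regression architecture the paper proposes), a single layer can be written as H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W); the NumPy sketch below uses a made-up three-node spatial graph.

    import numpy as np

    def gcn_layer(A, H, W):
        """One graph convolution: normalize the adjacency (with self-loops),
        aggregate neighbour features, then apply a learnable linear map."""
        A_hat = A + np.eye(A.shape[0])                 # add self-loops
        d = A_hat.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
        return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU

    A = np.array([[0, 1, 0],                           # toy 3-node spatial graph
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
    H = np.random.default_rng(0).normal(size=(3, 4))   # node features (e.g. covariates)
    W = np.random.default_rng(1).normal(size=(4, 2))   # layer weights
    print(gcn_layer(A, H, W).shape)                    # (3, 2)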

Computational Economics

AgriLOVE: agriculture, land-use and technical change in an evolutionary, agent-based model.

This paper presents a novel agent-based model of land use and technological change in the agricultural sector under environmental boundaries, finite available resources and changing land productivity. In particular, we model a spatially explicit economy populated by boundedly-rational farmers competing and innovating to fulfill an exogenous demand for food, while coping with a changing environment shaped by their production choices. Given the strong technological and environmental uncertainty, farmers learn and adaptively employ heuristics which guide their decisions on engaging in innovation and imitation activities, hiring workers, acquiring new farms, deforesting virgin areas and abandoni..
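
A deliberately stripped-down, hypothetical sketch of the kind of innovate-or-imitate heuristic such an evolutionary agent-based model gives its boundedly-rational farmers; the rules and parameter values below are illustrative only and do not reproduce the AgriLOVE model.

    import random

    random.seed(1)

    class Farmer:
        def __init__(self, productivity):
            self.productivity = productivity

        def step(self, neighbours, search_prob=0.1):
            """With some probability, search for better techniques: either
            innovate (a random draw around own productivity) or imitate the
            best observable neighbour; keep the improvement if any."""
            if random.random() < search_prob:
                if random.random() < 0.5:                      # innovation attempt
                    candidate = self.productivity * random.uniform(0.9, 1.2)
                else:                                          # imitation attempt
                    candidate = max(n.productivity for n in neighbours)
                self.productivity = max(self.productivity, candidate)

    farmers = [Farmer(random.uniform(0.8, 1.2)) for _ in range(100)]
    for _ in range(50):
        for f in farmers:
            f.step(random.sample(farmers, 5))
    print(sum(f.productivity for f in farmers) / len(farmers))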

Computational Economics

Extracting Firms' Short-Term Inflation Expectations from the Economy Watchers Survey Using Text Analysis

This paper discusses the Price Sentiment Index (PSI), a quantitative indicator of firms' outlook for general prices proposed by Otaka and Kan (2018). The PSI is developed from the textual data of the Economy Watchers Survey conducted by the Cabinet Office; it is computed by extracting firms' views from survey comments, using text analysis. In this paper, we revisit the PSI and quantitatively analyze the determinants of changes in the PSI and the relationship between the PSI and macroeconomic variables. We also address a shortcoming in the text analysis used for computing the PSI that we discover when examining the performance of the PSI since the COVID-19 outbreak. The results of our analyse..

Computational Economics

Reputational Assets and Social Media Marketing Activeness: Empirical Insights from China

We explore the linkages between social media marketing activeness and reputational assets on digital platforms with a unique sample of over 8,000 customer-to-customer (C2C) sellers registered on both Taobao, China’s largest C2C online shopping platform, and Sina Weibo, China’s largest microblogging platform. A unique collaborative effort between the two platforms enables us to examine whether C2C sellers are motivated to engage in marketing activities on a separate social media platform. Applying machine learning and natural language processing methods, we first identify whether C2C sellers conduct social media marketing on their microblogs. We then differentiate between earned and owned..

Computational Economics

A Simple EU Model in EViews

Many studies with different methods (CGE models, DSGE models, structural gravity equations) have recently evaluated the EU's Single Market. The problem with all these studies is that they use complex models with data sets that are not replicable. The aim of this paper is to develop a simple EU model which uses readily accessible data and which is replicable in EViews. First, the 10-equation macro model is used to evaluate Austria's EU membership since 1995. Then the same prototype model is applied to compare the integration effects across a selected number of EU Member States. Our simple EU model covers the essential economic effects of EU integration: the EU's Single Market, the introd..

Computational Economics

Credit Rating Agencies: Evolution or Extinction?

Credit Rating Agencies (CRAs) have been around for more than 150 years. Their role evolved from mere information collectors and providers to quasi-official arbitrators of credit risk throughout the global financial system. They compiled information that, at the time, was too difficult and costly for their clients to gather on their own. After the big market crash of 1929, they started to play a more formal role. Since then, investors have relied increasingly on CRAs' ratings. After the global financial crisis of 2007, the CRAs became the focal point of criticism from economists, politicians, the media, market participants and official regulatory agencies. The reason was obvious: the CRAs f..

Computational Economics

A Quantum Generative Adversarial Network for distributions

Generative Adversarial Networks are becoming a fundamental tool in Machine Learning, in particular in the context of improving the stability of deep neural networks. At the same time, recent advances in Quantum Computing have shown that, despite the absence of a fault-tolerant quantum computer so far, quantum techniques can provide an exponential advantage over their classical counterparts. We develop a fully connected Quantum Generative Adversarial Network and show how it can be applied in Mathematical Finance, with a particular focus on volatility modelling.
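
For readers unfamiliar with the classical building block, the sketch below trains a minimal, fully classical GAN on a one-dimensional distribution with PyTorch; the quantum generator and discriminator circuits discussed in the paper are not reproduced here.

    import torch
    from torch import nn

    torch.manual_seed(0)

    G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # generator
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator (logits)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 0.5 + 2.0        # target distribution: N(2, 0.5^2)
        fake = G(torch.randn(64, 1))

        # Discriminator update: push real samples towards 1, generated towards 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator update: try to make the discriminator label fakes as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(1000, 1)).mean().item())     # should drift towards 2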

Computational Economics

Predicting Credit Risk for Unsecured Lending: A Machine Learning Approach

Since the 1990s, there have been significant advances in technology and e-commerce, leading to an exponential increase in demand for cashless payment solutions. This has led to increased demand for credit cards, bringing with it the possibility of higher credit defaults and hence higher delinquency rates over time. The purpose of this research paper is to build a contemporary credit scoring model to forecast credit defaults for unsecured lending (credit cards) by employing machine learning techniques. As much of the customer payment data available to lenders for forecasting credit defaults is imbalanced (skewed), on account of a limited subset of def..
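
A minimal sketch of the usual starting point for a default model on an imbalanced sample: class weighting plus an evaluation metric that remains informative under skew. The scikit-learn pipeline and synthetic data below are illustrative and do not reflect the paper's actual model or dataset.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic "card holders": roughly 5% defaulters, mimicking a skewed portfolio.
    X, y = make_classification(n_samples=20_000, n_features=20,
                               weights=[0.95, 0.05], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Up-weight the rare default class via sample weights.
    w = [1.0 if label == 0 else 19.0 for label in y_tr]
    clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr, sample_weight=w)

    # AUC is more informative than accuracy when 95% of accounts never default.
    print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))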

Computational Economics

RieszNet and ForestRiesz: Automatic Debiased Machine Learning with Neural Nets and Random Forests

Many causal and policy effects of interest are defined by linear functionals of high-dimensional or non-parametric regression functions. $\sqrt{n}$-consistent and asymptotically normal estimation of the object of interest requires debiasing to reduce the effects of regularization and/or model selection on the object of interest. Debiasing is typically achieved by adding a correction term to the plug-in estimator of the functional, derived from a functional-specific theoretical characterization of what is known as the influence function, which leads to properties such as double robustness and Neyman orthogonality. We instead implement an automatic debiasing procedure based on automat..
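
To fix ideas, the classical debiasing recipe that such automatic procedures replace can be illustrated on the average treatment effect: a plug-in regression estimate is corrected by an influence-function term involving the propensity score. The sketch below is the standard doubly robust (AIPW) estimator on synthetic data, not RieszNet or ForestRiesz themselves, and it omits cross-fitting.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 3))
    T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))          # confounded treatment
    Y = 2.0 * T + X[:, 0] + rng.normal(size=n)                # true effect = 2

    mu = RandomForestRegressor(random_state=0).fit(np.c_[X, T], Y)   # outcome model
    e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]        # propensity model
    e = np.clip(e, 0.01, 0.99)                                # avoid extreme weights

    mu1 = mu.predict(np.c_[X, np.ones(n)])
    mu0 = mu.predict(np.c_[X, np.zeros(n)])

    # Plug-in estimate plus the influence-function correction term:
    plug_in = mu1 - mu0
    correction = T * (Y - mu1) / e - (1 - T) * (Y - mu0) / (1 - e)
    print((plug_in + correction).mean())                      # close to 2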

Computational Economics

Can an AI agent hit a moving target?

As the economies we live in evolve over time, it is imperative that economic agents in models form expectations that can adjust to changes in the environment. This exercise offers a plausible expectation-formation model that connects to computer science, psychology and neuroscience research on learning and decision-making, and applies it to an economy with a policy regime change. Employing the actor-critic model of reinforcement learning, an agent born into a fresh environment learns by first interacting with that environment. This involves taking exploratory actions and observing the corresponding stimulus signals. This interactive experience is then used to update its subjective..
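
A bare-bones actor-critic sketch in the spirit of the exercise, though not its economy: a critic tracks average reward, the temporal-difference error serves as the stimulus signal, and exploratory actions let the agent relearn after a regime change at the midpoint. All parameter values are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)

    prefs = np.zeros(2)        # actor: action preferences (softmax policy)
    value = 0.0                # critic: running estimate of average reward
    alpha, beta, eps = 0.1, 0.1, 0.1

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    for t in range(6000):
        best = 0 if t < 2000 else 1            # policy regime change at t = 2000
        probs = softmax(prefs)
        # Exploratory behaviour: occasionally take a random action.
        a = rng.integers(2) if rng.random() < eps else rng.choice(2, p=probs)
        reward = 1.0 if a == best else 0.0
        td_error = reward - value              # critic's surprise ("stimulus") signal
        value += beta * td_error               # critic update
        prefs[a] += alpha * td_error * (1 - probs[a])      # actor (policy-gradient) update
        prefs[1 - a] -= alpha * td_error * probs[1 - a]

    print(softmax(prefs))   # after the change, most mass sits on the new best action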

Computational Economics

Learning to Classify and Imitate Trading Agents in Continuous Double Auction Markets

Continuous double auctions such as the limit order book employed by exchanges are widely used in practice to match buyers and sellers of a variety of financial instruments. In this work, we develop an agent-based model for trading in a limit order book and show (1) how opponent modelling techniques can be applied to classify trading agent archetypes and (2) how behavioural cloning can be used to imitate these agents in a simulated setting. We experimentally compare a number of techniques for both tasks and evaluate their applicability and use in real-world scenarios.
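
Behavioural cloning, as used in task (2), amounts to supervised learning of a state-to-action mapping from logged trajectories. The scikit-learn sketch below imitates a hand-coded "expert" on synthetic order-book-style states; the features, labels and expert rule are all invented.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic demonstrations: state = (spread, imbalance, inventory),
    # action = 0 hold, 1 buy, 2 sell, generated by a simple hand-coded "expert".
    states = rng.normal(size=(5000, 3))
    actions = np.where(states[:, 1] > 0.5, 1, np.where(states[:, 1] < -0.5, 2, 0))

    X_tr, X_te, y_tr, y_te = train_test_split(states, actions, random_state=0)
    clone = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500,
                          random_state=0).fit(X_tr, y_tr)
    print("imitation accuracy:", clone.score(X_te, y_te))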

Computational Economics

Traders in a Strange Land: Agent-based discrete-event market simulation of the Figgie card game

Figgie is a card game that approximates open-outcry commodities trading. We design strategies for Figgie and study their performance and the resulting market behavior. To do this, we develop a flexible agent-based discrete-event market simulation in which agents operating under our strategies can play Figgie. Our simulation builds upon previous work by simulating latencies between agents and the market in a novel and efficient way. The fundamentalist strategy we develop takes advantage of Figgie's unique notion of asset value, and is, on average, the profit-maximizing strategy in all combinations of agent strategies tested. We develop a strategy, the "bottom-feeder", which estimates value by..
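
A stripped-down illustration of the latency mechanism mentioned above: a discrete-event loop in which each agent's order reaches the market after an agent-specific delay, implemented with a priority queue. The agents, delays and orders below are invented for illustration.

    import heapq
    import random

    random.seed(0)

    events = []                                 # (arrival_time, seq, agent_id, order)
    seq = 0
    latency = {0: 0.002, 1: 0.010, 2: 0.050}    # per-agent network delay (seconds)

    def submit(now, agent_id, order):
        """Schedule an order to arrive at the market after the agent's latency."""
        global seq
        heapq.heappush(events, (now + latency[agent_id], seq, agent_id, order))
        seq += 1

    # Agents submit at the same instant; arrival order is decided by latency alone.
    for agent_id in (2, 0, 1):
        submit(0.0, agent_id, {"side": random.choice(["buy", "sell"]), "qty": 1})

    while events:
        t, _, agent_id, order = heapq.heappop(events)
        print(f"t={t:.3f}s market processes agent {agent_id}: {order}")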

Computational Economics

Deep Learning for Principal-Agent Mean Field Games

Here, we develop a deep learning algorithm for solving Principal-Agent (PA) mean field games with market-clearing conditions -- a class of problems that has thus far not been studied and one that poses difficulties for standard numerical methods. We use an actor-critic approach to optimization, where the agents form a Nash equilibrium according to the principal's penalty function, and the principal evaluates the resulting equilibrium. The inner problem's Nash equilibrium is obtained using a variant of the deep backward stochastic differential equation (BSDE) method modified for McKean-Vlasov forward-backward SDEs that includes dependence on the distribution over both the forward and backward p..

Computational Economics

Solving Multistage Stochastic Linear Programming via Regularized Linear Decision Rules: An Application to Hydrothermal Dispatch Planning

The solution of multistage stochastic linear problems (MSLP) represents a challenge for many applications. Long-term hydrothermal dispatch planning (LHDP) materializes this challenge in a real-world problem that affects electricity markets, economies, and natural resources worldwide. No closed-form solutions are available for MSLP and the definition of non-anticipative policies with high-quality out-of-sample performance is crucial. Linear decision rules (LDR) provide an interesting simulation-based framework for finding high-quality policies to MSLP through two-stage stochastic models. In practical applications, however, the number of parameters to be estimated when using an LDR may be clos..
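
The core of a linear decision rule is to restrict each stage's decision to an affine function of the observed uncertainty, x(xi) = b + w·xi, and the regularization discussed above can be sketched as a ridge-type fit of that map on simulated scenarios. The NumPy toy below is purely illustrative and unrelated to any real hydrothermal system.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated scenarios of the exogenous uncertainty (e.g. inflows), several lags each.
    n_scen, n_lags = 200, 12
    xi = rng.gamma(shape=2.0, scale=1.0, size=(n_scen, n_lags))

    # Pretend "good" decisions from an oracle, to be approximated by an LDR.
    x_star = 0.7 * xi[:, 0] + 0.2 * xi[:, 1] + rng.normal(0, 0.1, n_scen)

    # Regularized LDR: x(xi) = b + w @ xi, fitted by ridge regression;
    # the penalty shrinks the many nearly-irrelevant lag coefficients.
    Z = np.c_[np.ones(n_scen), xi]
    lam = 1.0
    coef = np.linalg.solve(Z.T @ Z + lam * np.eye(n_lags + 1), Z.T @ x_star)
    print("intercept and first two lag weights:", coef[:3])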

Computational Economics

Machine Learning, Deep Learning, and Hedonic Methods for Real Estate Price Prediction

In recent years, complaints about racial discrimination in home value appraisals have been accumulating. For several decades, appraisers have estimated the sale price of residential properties by walking through them, observing their condition, collecting data, and applying hedonic pricing models. However, this method is costly and, by nature, subjective and prone to bias. To minimize human involvement and bias in real estate appraisals and to boost the accuracy of market price prediction models, in this research we design data-efficient learning machines capable of learning and extracting the relations or patterns between the inputs (f..
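
The contrast drawn above can be illustrated by fitting a classical hedonic regression (price on property attributes) next to a non-linear learner on the same synthetic data; the attributes and coefficients below are invented stand-ins for appraisal inputs.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    sqft = rng.uniform(50, 300, n)
    rooms = rng.integers(1, 7, n)
    age = rng.uniform(0, 80, n)
    price = 1500 * sqft + 8000 * rooms - 300 * age + 0.5 * sqft * rooms \
            + rng.normal(0, 20_000, n)                    # mild non-linearity

    X = np.c_[sqft, rooms, age]
    X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

    hedonic = LinearRegression().fit(X_tr, y_tr)          # classical hedonic model
    ml = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    print("hedonic R^2:", hedonic.score(X_te, y_te))
    print("boosting R^2:", ml.score(X_te, y_te))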

Computational Economics

Towards Robust Representation of Limit Orders Books for Deep Learning Models

The success of machine learning models is highly reliant on the quality and robustness of representations. A lack of attention to the robustness of representations may increase risks when data-driven machine learning models are used for trading in financial markets. In this paper, we focus on representations of limit order book (LOB) data and discuss the opportunities and challenges of representing such data in an effective and robust manner. We analyse the issues associated with the commonly used LOB representation for machine learning models from both theoretical and experimental perspectives. Based on this, we propose new LOB representation schemes to improve the performance and robus..
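
One commonly used baseline LOB representation is the stacked top-k price/volume levels per side at each snapshot; a minimal construction of that feature vector (NumPy, fabricated book) is sketched below, without any claim that it matches the schemes proposed in the paper.

    import numpy as np

    def lob_snapshot_features(bids, asks, k=5):
        """Flatten the top-k levels of each side into a fixed-length vector:
        [ask_price_1, ask_vol_1, bid_price_1, bid_vol_1, ..., level k]."""
        feats = []
        for level in range(k):
            ap, av = asks[level] if level < len(asks) else (np.nan, 0.0)
            bp, bv = bids[level] if level < len(bids) else (np.nan, 0.0)
            feats.extend([ap, av, bp, bv])
        return np.array(feats)

    bids = [(99.98, 300), (99.97, 500), (99.95, 120)]   # (price, volume), best first
    asks = [(100.01, 200), (100.02, 450)]
    print(lob_snapshot_features(bids, asks, k=3))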

Computational Economics

Deep Learning of Potential Outcomes

This review systematizes the emerging literature for causal inference using deep neural networks under the potential outcomes framework. It provides an intuitive introduction on how deep learning can be used to estimate/predict heterogeneous treatment effects and extend causal inference to settings where confounding is non-linear, time varying, or encoded in text, networks, and images. To maximize accessibility, we also introduce prerequisite concepts from causal inference and deep learning. The survey differs from other treatments of deep learning and causal inference in its sharp focus on observational causal estimation, its extended exposition of key algorithms, and its detailed tutorials..
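
As a concrete, deliberately simple instance of estimating heterogeneous treatment effects with neural networks, the sketch below fits a T-learner (one outcome network per treatment arm) on synthetic data; it is a didactic stand-in rather than any specific architecture covered by the review.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 4000
    X = rng.normal(size=(n, 2))
    T = rng.binomial(1, 0.5, n)                       # randomized for simplicity
    tau = 1.0 + X[:, 0]                               # true heterogeneous effect
    Y = X[:, 1] + tau * T + rng.normal(0, 0.5, n)

    # T-learner: one regression per arm, CATE = difference of predictions.
    m1 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X[T == 1], Y[T == 1])
    m0 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X[T == 0], Y[T == 0])
    cate_hat = m1.predict(X) - m0.predict(X)
    print("corr(true effect, estimate):", np.corrcoef(tau, cate_hat)[0, 1])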

Computational Economics

Efficient Estimation in NPIV Models: A Comparison of Various Neural Networks-Based Estimators

We investigate the computational performance of Artificial Neural Networks (ANNs) in semi-nonparametric instrumental variables (NPIV) models with high-dimensional covariates that are relevant to empirical work in economics. We focus on efficient estimation of, and inference on, expectation functionals (such as weighted average derivatives) and use optimal criterion-based procedures (sieve minimum distance, or SMD) and novel efficient score-based procedures (ES). Both of these procedures use ANNs to approximate the unknown function. We then provide a detailed practitioner's recipe for implementing these two classes of estimators. This involves the choice of tuning parameters both for the unknown func..

Computational Economics

Hotel Preference Rank based on Online Customer Review

Top-tier hotels are now shifting to digital methods of understanding their customers in order to maintain and ensure satisfaction. Rather than relying on conventional approaches such as written reviews or interviews, hotels are now investing heavily in Artificial Intelligence, particularly Machine Learning solutions. Analysing online customer reviews changes the way companies make decisions, and does so more effectively than conventional analysis. The purpose of this research is to measure hotel service quality. The proposed approach examines reviews of the top five luxury hotels in Indonesia that appear on the online travel site TripAdvisor, based on its Best of 2018 section, along service quality dimensions. In..

Computational Economics

Dyadic Double/Debiased Machine Learning for Analyzing Determinants of Free Trade Agreements

This paper presents novel methods and theories for estimation and inference about parameters in econometric models using machine learning of nuisance parameters when data are dyadic. We propose a dyadic cross fitting method to remove over-fitting biases under arbitrary dyadic dependence. Together with the use of Neyman orthogonal scores, this novel cross fitting method enables root-$n$ consistent estimation and inference robustly against dyadic dependence. We illustrate an application of our general framework to high-dimensional network link formation models. With this method applied to empirical data of international economic networks, we reexamine determinants of free trade agreements (FTA..
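
To make the ingredients concrete, the sketch below shows ordinary i.i.d. cross-fitting with a Neyman-orthogonal score in a partially linear model using scikit-learn; the paper's dyadic cross-fitting, which partitions by the nodes forming each dyad rather than by independent observations, is not reproduced here.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 5))
    D = X[:, 0] + rng.normal(size=n)                 # "treatment", e.g. a pair covariate
    Y = 1.5 * D + X[:, 0] ** 2 + rng.normal(size=n)  # true parameter = 1.5

    res_y = np.zeros(n)
    res_d = np.zeros(n)
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        # Nuisances fitted out-of-fold to remove over-fitting biases.
        res_y[test] = Y[test] - RandomForestRegressor(random_state=0).fit(X[train], Y[train]).predict(X[test])
        res_d[test] = D[test] - RandomForestRegressor(random_state=0).fit(X[train], D[train]).predict(X[test])

    theta_hat = (res_d @ res_y) / (res_d @ res_d)    # orthogonal-score estimate
    print(theta_hat)                                 # close to 1.5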

Computational Economics

Emission distribution and incidence of national mitigation policies among households in Austria

One major barrier to the feasibility of national climate policies is limited public acceptance due to distributional concerns. In the literature, different approaches are used to investigate the incidence of climate policies across income groups. We apply three approaches to incidence analysis, which vary in terms of data and computational intensity, to the case of Austria: (i) household fuel expenditure analysis, (ii) household carbon footprints and (iii) macroeconomic general equilibrium modelling with heterogeneous households. As concerns about heterogeneity within low-income groups (horizontal equity) have recently been articulated as a main objection to effective redistributive revenue rec..

Computational Economics

Reinforcement Learning for Systematic FX Trading

We conduct a detailed experiment on major cash FX pairs, accurately accounting for transaction and funding costs. These sources of profit and loss, including the price trends that occur in the currency markets, are made available, via a quadratic utility, to our recurrent reinforcement learner, which learns to target a position directly. We improve upon earlier work by casting the problem of learning to target a risk position in an online learning context. This online learning occurs sequentially in time, but also in the form of transfer learning. We transfer the output of radial basis function hidden processing units, whose means, covariances and overall size are determined by Gaussian mixt..
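
The position-targeting logic under quadratic utility can be made explicit: maximizing mu·x - (lambda/2)·sigma^2·x^2 over the position x gives x* = mu / (lambda·sigma^2), which a learner can approximate online. The sketch below merely evaluates that closed form on made-up forecasts; it is not the paper's recurrent learner.

    import numpy as np

    def target_position(mu, sigma, risk_aversion=4.0, max_units=10.0):
        """Quadratic-utility optimum: expected return over risk-scaled variance,
        clipped to the allowed position size."""
        x = mu / (risk_aversion * sigma ** 2)
        return float(np.clip(x, -max_units, max_units))

    # Hypothetical one-step-ahead return and volatility forecasts for an FX pair.
    print(target_position(mu=0.0004, sigma=0.006))    # modest long position
    print(target_position(mu=-0.0010, sigma=0.006))   # larger short position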

Computational Economics

Discovering new plausibility checks for supervisory data

In carrying out its banking supervision tasks as part of the Single Supervisory Mechanism (SSM), the European Central Bank (ECB) collects and disseminates data on significant and less significant institutions. To ensure harmonised supervisory reporting standards, the data are represented through the European Banking Authority’s data point model, which defines all the relevant business concepts and the validation rules. For the purpose of data quality assurance and assessment, ECB experts may implement additional plausibility checks on the data. The ECB is constantly seeking ways to improve these plausibility checks in order to detect suspicious or erroneous values and to provide high-quali..
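
A toy example of the kind of statistical plausibility check that can complement formal validation rules: flag a reported value whose change deviates sharply from an institution's own reporting history. The threshold and figures below are invented and do not represent any actual ECB check.

    import numpy as np

    def flag_implausible(series, z_threshold=4.0):
        """Flag the latest reported value if its change deviates from the
        institution's historical period-on-period changes by > z_threshold sigmas."""
        changes = np.diff(series)
        mu, sigma = changes[:-1].mean(), changes[:-1].std(ddof=1)
        z = (changes[-1] - mu) / (sigma + 1e-12)
        return abs(z) > z_threshold, z

    reported_total_assets = [100.0, 101.2, 99.8, 100.5, 101.0, 250.0]  # suspicious jump
    suspicious, z = flag_implausible(reported_total_assets)
    print(suspicious, round(z, 1))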

Computational Economics

Towards a fully RL-based Market Simulator

We present a new financial framework in which two families of RL-based agents, representing the Liquidity Providers and Liquidity Takers, learn simultaneously to satisfy their objectives. Thanks to a parametrized reward formulation and the use of Deep RL, each group learns a shared policy able to generalize and interpolate over a wide range of behaviors. This is a step towards a fully RL-based market simulator replicating complex market conditions, particularly suited to studying the dynamics of the financial market under various scenarios.

Computational Economics

Investigating government spending multiplier for the US economy: empirical evidence using a triple lasso approach

An essential dilemma in economics that has yielded ambiguous answers is whether governments should spend more in recessions. This paper provides an extension of the work of Ramey & Zubairy (2018) for the US economy, according to which government spending multipliers are below unity, especially when the economy experiences severe slack. Nonetheless, their work suffered from some limitations with respect to invertibility and the weak-instrument problem. The contribution of this paper is twofold. Firstly, it provides evidence that a triple lasso approach to lag selection is a useful tool for removing the invertibility issues and the weak-instrument problem. Secondly, the main results using a..
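
The lag-selection idea can be illustrated with a plain lasso on a local-projection-style regression, where lags whose coefficients are shrunk to zero are dropped. The scikit-learn sketch below on simulated series is only a schematic of a single lasso selection step, not the paper's triple lasso procedure.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)
    T, n_lags = 400, 8

    shock = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(2, T):
        y[t] = 0.6 * shock[t - 1] + 0.3 * shock[t - 2] + rng.normal(0, 0.5)

    # Regress the outcome on many lags of the shock; lasso keeps only the relevant ones.
    X = np.column_stack([np.roll(shock, k) for k in range(1, n_lags + 1)])[n_lags:]
    lasso = LassoCV(cv=5, random_state=0).fit(X, y[n_lags:])
    print("selected lags:", [k + 1 for k, c in enumerate(lasso.coef_) if abs(c) > 1e-6])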

Computational Economics

Sector Volatility Prediction Performance Using GARCH Models and Artificial Neural Networks

Recently, artificial neural networks (ANNs) have seen success in volatility prediction, but the literature is divided on when an ANN should be used instead of the common GARCH model. The purpose of this study is to compare the volatility prediction performance of ANN and GARCH models when applied to stocks with low, medium, and high volatility profiles. This approach intends to identify which model should be used in each case. The volatility profiles comprise five sectors that cover all stocks in the U.S. stock market from 2005 to 2020. Three GARCH specifications and three ANN architectures are examined for each sector, and the most adequate model is chosen to move on to forecasting. T..
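
A minimal version of the two model families being compared: a GARCH(1,1) fitted with the arch package and a small feed-forward network forecasting next-day squared returns from lagged squared returns via scikit-learn. The simulated return series below merely stands in for a sector series.

    import numpy as np
    from arch import arch_model
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    returns = rng.standard_t(df=5, size=2000) * 0.8      # stand-in daily % returns

    # GARCH(1,1): conditional variance h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}
    garch = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    print(garch.forecast(horizon=1).variance.iloc[-1, 0])

    # ANN: predict tomorrow's squared return from the last 5 squared returns.
    sq = returns ** 2
    X = np.column_stack([sq[i:len(sq) - 5 + i] for i in range(5)])
    y = sq[5:]
    ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=1000,
                       random_state=0).fit(X[:-1], y[:-1])
    print(ann.predict(X[-1:]).item())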

Computational Economics