Publications and Preprints
J. Bodik, Y. Huang, and B. Yu (2025, preprint, submitted to JRSSb). Cross-World Assumption and Refining Prediction Intervals for Individual Treatment Effects.
J. Bodik (2025, preprint, submitted to CLeaR). Counterfactual prediction under cross-world dependence.
I. Azizi✶, J. Bodik✶, J. Heiss✶, and B. Yu (2025, preprint, submitted to ICLR; ✶equal contribution). CLEAR: Calibrated Learning for Epistemic and Aleatoric Risk.
J. Bodik and O. Pasche (2024, preprint, submitted to Bernoulli). Granger causality in extremes.
J. Bodik and V. Chavez-Demoulin (2025, Journal of Machine Learning Research). Identifiability of causal graphs under nonadditive conditionally parametric causal models.
J. Bodik and V. Chavez-Demoulin (2025, Biometrika). Structural restrictions in local causal discovery: identifying direct causes of a target variable.
J. Bodik (2024, Mathematics). Extreme Treatment Effect: Extrapolating Causal Effects Into Extreme Treatment Domain.
J. Bodik, Z. Pawlas, and M. Paluš (2024, Extremes). Causality in extremes of time series.
J. Bodik, L. Mhalla, and V. Chavez-Demoulin (2022; project permanently on hold). Detecting causal covariates for extreme dependence structures.
J. Bodik, Z. Pawlas, and M. Paluš (2021, Master's thesis; awarded a 10,000 Kč prize for best thesis). Detection of causality in time series using extreme values.
Reviewing
Journal of the Royal Statistical Society, Series B (JRSSb)
Statistics & Probability Letters
Behaviormetrika
REVSTAT - Statistical Journal
Data Mining and Knowledge Discovery
Selected conferences and talks
Online Causal Inference Seminar (the largest international seminar on causal inference, online, 3.2.2026)
SANKEN+RIKEN Data Science Seminar (joint seminar on causality organised by SANKEN and RIKEN, University of Osaka, Osaka, Japan, 26.12.2025)
CHUV BDSC seminar (Clinical Data Science Group seminar at CHUV, Lausanne, Switzerland, 29.9.2025)
UAI 2024 (Workshop on Causal Inference in Time Series, Barcelona, Spain, 15.7.2024–19.7.2024)