ALGORITHMIC TRANSPARENCY AND MARKETING EFFECTIVENESS
Abstract
Algorithmic systems increasingly govern digital marketing activities, shaping content visibility, targeted advertising, and personalized recommendations. While prior research has emphasized algorithmic accuracy and performance optimization, limited attention has been given to how algorithmic transparency influences consumer responses. This study examines the role of perceived algorithmic transparency in shaping trust, perceived fairness, and marketing effectiveness outcomes. Drawing on trust theory, fairness heuristic theory, and algorithm aversion literature, we propose and test a conceptual framework linking transparency cues to trust in platforms and brands, fairness perceptions, consumer engagement, and purchase intention. The findings suggest that algorithmic transparency enhances perceived fairness and trust, which in turn positively affect engagement and purchase intention. However, excessive transparency regarding data use practices may activate privacy concerns, attenuating positive effects. The results contribute to emerging research on algorithmic governance in marketing by reframing transparency as both a trust-building mechanism and a strategic trade-off. Managerially, the study highlights the importance of designing transparent algorithmic communication strategies that balance clarity with privacy sensitivity to sustain long-term marketing effectiveness.
References
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal in algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Awad, N. F., & Krishnan, M. S. (2006). The personalization privacy paradox: An empirical evaluation of information transparency and the willingness to be profiled online. MIS Quarterly, 30(1), 13–28. https://doi.org/10.2307/25148715
Buell, R. W., Kim, T., & Tsay, C.-J. (2017). Creating reciprocal value through operational transparency. Management Science, 63(6), 1673–1695. https://doi.org/10.1287/mnsc.2016.2473
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. https://doi.org/10.1177/0022243719851788
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. https://doi.org/10.1037/xge0000033
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155–1170. https://doi.org/10.1287/mnsc.2016.2643
Dinev, T., & Hart, P. (2006). An extended privacy calculus model for e-commerce transactions. Information Systems Research, 17(1), 61–80. https://doi.org/10.1287/isre.1060.0080
Gefen, D., Karahanna, E., & Straub, D. W. (2003). Trust and TAM in online shopping: An integrated model. MIS Quarterly, 27(1), 51–90. https://doi.org/10.2307/30036519
Grimmelikhuijsen, S., & Meijer, A. (2014). Effects of transparency on the perceived trustworthiness of a government organization: Evidence from an online experiment. Journal of Public Administration Research and Theory, 24(1), 137–157. https://doi.org/10.1093/jopart/mus048
Kizilcec, R. F. (2016). How much information? Effects of transparency on trust in an algorithmic interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2390–2395. https://doi.org/10.1145/2858036.2858402
Lambrecht, A., & Tucker, C. (2013). When does retargeting work? Information specificity in online advertising. Journal of Marketing Research, 50(5), 561–576. https://doi.org/10.1509/jmr.11.0503
Martin, K. D., & Murphy, P. E. (2017). The role of data privacy in marketing. Journal of the Academy of Marketing Science, 45(2), 135–155. https://doi.org/10.1007/s11747-016-0495-4
Newell, S., & Marabelli, M. (2015). Strategic opportunities (and challenges) of algorithmic decision-making: A call for action on the long-term societal effects of "datification." The Journal of Strategic Information Systems, 24(1), 3–14. https://doi.org/10.1016/j.jsis.2015.02.001
Rai, A. (2020). Explainable AI: From black box to glass box. Journal of the Academy of Marketing Science, 48(1), 137–141. https://doi.org/10.1007/s11747-019-00710-5
Shin, D. (2020). The effects of explainability and causability on trust in AI systems. Telematics and Informatics, 49, 101389. https://doi.org/10.1016/j.tele.2020.101389
Wang, W., & Benbasat, I. (2007). Recommendation agents for electronic commerce: Effects of explanation facilities on trusting beliefs. Journal of Management Information Systems, 23(4), 217–246. https://doi.org/10.2753/MIS0742-1222230410
Zhu, H., & Chang, Y. (2020). Understanding algorithmic fairness perceptions. Computers in Human Behavior, 110, 106405.