Legal Matters
More operators say they will explore productivity-boosting technology this year, but a recent LegalDive article warns them to proceed carefully: the legal community anticipates a rise in employment-selection cases involving hiring decisions shaped by AI bias.

Where AI picks one candidate over another based on a number of variables, plaintiffs could bring claims on the basis of age, race, or gender. Amazon's experience with its AI hiring tool is a cautionary tale of how bias can be inadvertently encoded into such systems.
The training data fed into Amazon's system consisted primarily of resumes submitted to the company over a 10-year period. Learning patterns from that historical data, the system absorbed the male dominance of technical positions at Amazon and in the broader tech industry. When the AI encountered resumes containing words or phrases typically associated with female applicants, it downgraded them, making the system more likely to recommend male candidates even when female candidates had similar or better qualifications.
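For readers curious about the mechanics, here is a deliberately simplified, hypothetical sketch of how this kind of bias gets learned. The resumes and word-scoring scheme below are invented for illustration; they are not Amazon's actual data or model. The point is only that when past hiring skews one way, a system trained to imitate it assigns a penalty to vocabulary associated with the underrepresented group.

```python
# Hypothetical illustration of bias learned from skewed training data.
# The "historical" resumes and scoring method are invented for this sketch;
# this is not Amazon's actual system or data.
from collections import Counter

# Toy history: past hires happen to skew away from certain vocabulary.
hired = [
    "software engineer captain chess club",
    "software developer java python",
    "engineer systems java",
]
rejected = [
    "software engineer women's chess club captain",
    "developer women's coding society python",
]

def word_weights(pos_docs, neg_docs):
    """Weight each word by how much more often it appears in hired resumes."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    vocab = set(pos) | set(neg)
    # Positive weight: associated with past hires; negative: with rejections.
    return {w: pos[w] / len(pos_docs) - neg[w] / len(neg_docs) for w in vocab}

def score(resume, weights):
    return sum(weights.get(w, 0.0) for w in resume.split())

weights = word_weights(hired, rejected)
a = score("software engineer chess club captain", weights)
b = score("software engineer women's chess club captain", weights)
# Identical qualifications, but "women's" carries a learned penalty,
# so resume b is ranked below resume a.
assert b < a
```

Nothing in the code mentions gender explicitly; the penalty emerges entirely from which resumes were hired in the past, which is why this kind of bias is easy to introduce and hard to spot.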
Allowing consumers to interact directly with AI tools in the hospitality industry may introduce other legal risks. Chatbots, recommendation systems, and virtual assistants often require the collection and processing of sensitive consumer data, such as personal preferences, payment information, and booking history. If AI tools deliver inaccurate or misleading information to consumers, or if businesses fail to properly secure the data those tools collect or to comply with data protection regulations, the legal exposure can be significant.
