I'm having an issue with a web scraper that uses Selenium WebDriver: it fails to locate elements when running on a Linux server. The scraper works fine on my local machine, but on the Linux server it throws a NoSuchElementException.
The error message I receive is:
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".Crom_table__p1iZz"}
I've tried using WebDriverWait to wait for the table to load, but the wait still times out. I've also logged the page source on the Linux server and compared it with the source from my local machine, and there are significant differences in the HTML structure.
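For what it's worth, this is roughly how I compared the two dumps (the local filename here is a placeholder; the server dump is the one written by the error handler in the code below):

    import difflib

    # Compare the HTML dump from my local machine against the one from the server
    with open('local_page_source.txt') as f_local, open('error_page_source.txt') as f_server:
        local_lines = f_local.readlines()
        server_lines = f_server.readlines()

    # Print a unified diff of the two page sources
    for line in difflib.unified_diff(local_lines, server_lines, fromfile='local', tofile='server'):
        print(line, end='')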
I've also set a custom user-agent so the site receives the same request headers as my local browser, but that hasn't helped either. I'm not sure what else to try.
Here's a snippet of my code:
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.chrome.service import Service as ChromeService
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.support.ui import WebDriverWait
    from webdriver_manager.chrome import ChromeDriverManager

    def team_defenses(season):
        url = 'https://www.nba.com/stats/teams/defense?Season=' + season
        driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()),
                                  options=get_driver_options())
        driver.get(url)
        # Wait for the table to load; if it doesn't, dump the page source for inspection
        try:
            WebDriverWait(driver, 10).until(
                EC.presence_of_element_located((By.CLASS_NAME, 'Crom_table__p1iZz'))
            )
        except Exception as e:
            print(f"Error: {e}")
            with open('error_page_source.txt', 'w') as f:
                f.write(driver.page_source)
            driver.quit()
            return
        table = driver.find_element(By.CLASS_NAME, 'Crom_table__p1iZz')
        # ... rest of my code ...
        driver.quit()
    def get_driver_options():
        options = Options()
        options.add_argument('--headless')              # run Chrome without a visible window
        options.add_argument('--no-sandbox')            # bypass the OS security model
        options.add_argument('--disable-dev-shm-usage') # avoid /dev/shm size limits on the server
        # Custom user-agent mentioned above (the exact string here is just an example of what I use)
        options.add_argument('--user-agent=Mozilla/5.0 (X11; Linux x86_64) '
                             'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36')
        return options
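For reference, I call the function like this (the season string follows the format the NBA stats URL expects, e.g. 2023-24):

    team_defenses('2023-24')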