Sync from development - prepare for v0.2.0

This commit is contained in:
Omni
2025-12-06 20:09:55 +00:00
parent fe14e4ecfb
commit ce969eba1b
277 changed files with 14059 additions and 3899 deletions

View File

@@ -1,5 +1,95 @@
# Jackify Changelog # Jackify Changelog
## v0.2.0 - Modlist Gallery, OAuth Authentication & Performance Improvements
**Release Date:** 2025-12-06
### Major Features
#### Modlist Selection Gallery
Complete overhaul of modlist selection (first pass):
**Core Features:**
- Card-based Modlist Selection browser with modlist images, titles, authors and metadata
- Game-specific filtering automatically applied based on selected game type
- Details per card: download/install/total sizes, tags, version, badges
- Async image loading from GitHub with local 7-day caching (see the sketch after this list)
- Detail view with full descriptions, banner images, and external links
- Selected modlist automatically populates Install Modlist workflow
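A minimal sketch of how a 7-day on-disk image cache can work, using only the standard library; the cache directory, file naming, and timeout here are assumptions for illustration, not Jackify's actual gallery code:

```python
# Hypothetical 7-day image cache; paths and naming are illustrative only.
import hashlib
import time
import urllib.request
from pathlib import Path

CACHE_DIR = Path.home() / ".cache" / "jackify" / "gallery-images"  # assumed location
MAX_AGE_SECONDS = 7 * 24 * 3600  # 7 days

def fetch_image(url: str) -> bytes:
    """Return image bytes, re-downloading only when the cached copy is older than 7 days."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / hashlib.sha256(url.encode("utf-8")).hexdigest()
    if cached.exists() and (time.time() - cached.stat().st_mtime) < MAX_AGE_SECONDS:
        return cached.read_bytes()  # cache hit: serve the local copy
    data = urllib.request.urlopen(url, timeout=10).read()  # cache miss or stale: download
    cached.write_bytes(data)
    return data
```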
**Search and Filtering:**
- Text search across modlist names and descriptions
- Multi-select tag filtering with normalized tags
- Show Official Only, Show NSFW, Hide Unavailable toggles
- Mod search capability - find modlists containing specific Nexus mods
- Randomised card ordering
**Performance:**
- Gallery images load from the local cache
- Background metadata and image preloading when the Install Modlist screen opens
- Efficient rendering - cards are created once and filters only toggle visibility (see the sketch after this list)
- Non-blocking UI with concurrent image downloads
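A minimal sketch of the "create once, toggle visibility" pattern, assuming a PySide6/Qt gallery; the class and data shapes are illustrative, not Jackify's real widgets:

```python
# Hypothetical PySide6 sketch: cards are built once, filtering only hides/shows them.
from PySide6.QtWidgets import QGridLayout, QLabel, QWidget

class GallerySketch(QWidget):
    def __init__(self, modlists, columns=3):
        super().__init__()
        grid = QGridLayout(self)
        self._cards = []
        for index, meta in enumerate(modlists):
            card = QLabel(meta["title"])  # stand-in for the real card widget
            grid.addWidget(card, index // columns, index % columns)
            self._cards.append((meta, card))

    def apply_filter(self, predicate):
        # No widgets are created or destroyed here; visibility is just toggled.
        for meta, card in self._cards:
            card.setVisible(bool(predicate(meta)))
```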
**Steam Deck Optimized:**
- Dynamic card sizing (e.g. 250x270 on Steam Deck, larger on desktop)
- Responsive grid layout (up to 4 columns on large screens, 3 on Steam Deck)
- Optimized spacing and padding for 1280x800 displays
#### OAuth 2.0 Authentication
Modern authentication for Nexus Mods with secure token management:
- One-click browser-based authorization with PKCE security (see the sketch after this list)
- Automatic token refresh with encrypted storage
- Authorisation status indicator on the Install Modlist screen
- Works in both GUI and CLI workflows
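A hedged sketch of the PKCE portion of the browser authorization step; the authorization endpoint, client ID, and parameter set below are placeholders, and only the `jackify://oauth/callback` URI is taken from this changelog:

```python
# Illustrative PKCE setup for an OAuth 2.0 authorization request; values are placeholders.
import base64
import hashlib
import secrets
import urllib.parse

# code_verifier: high-entropy random string; code_challenge: its SHA-256, base64url-encoded
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
challenge = base64.urlsafe_b64encode(
    hashlib.sha256(verifier.encode("ascii")).digest()
).rstrip(b"=").decode("ascii")

params = {
    "response_type": "code",
    "client_id": "jackify",                      # placeholder client ID
    "redirect_uri": "jackify://oauth/callback",  # custom URI mentioned in this changelog
    "code_challenge": challenge,
    "code_challenge_method": "S256",
}
authorize_url = "https://example.com/oauth/authorize?" + urllib.parse.urlencode(params)
# The browser is opened at authorize_url; `verifier` is sent later with the token request.
```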
#### Compact Mode UI Redesign
Streamlined interface with dynamic window management:
- Default compact mode with optional Details view
- Activity window tab (default) across all workflow screens
- Process Monitor tab still available
- Show Details toggle for console output when needed
### Critical Fixes
#### Replaced TTW Installer
- Replaced the previous TTW Installer due to complexities with its config file
#### GPU Texture Conversion (jackify-engine 0.4.0)
- Fixed GPU not being used for BC7/BC6H texture conversions
- Previous versions fell back to CPU-only despite GPU availability
- Added GPU toggle in Settings (enabled by default)
#### Winetricks Compatibility & Protontricks
- Fixed bundled winetricks path incompatibility
- Winetricks component downloads should now succeed in cases where they previously failed
- Jackify still defaults to bundled winetricks for now (a Protontricks toggle is available in Settings)
#### Steam Restart Reliability
- Enhanced Steam restart so that it should now work more reliably across all distros
- Fixed Flatpak detection blocking normal Steam start methods
### Technical Improvements
- Proton version usage clarified: Install Proton for installation/texture processing, Game Proton for shortcuts
- Centralised Steam detection in SystemInfo
- ConfigHandler refactored to always read fresh from disk
- Removed obsolete dotnet4.x code
- Enhanced Flatpak Steam compatdata detection with proper VDF parsing (see the sketch below)
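A minimal sketch of what proper VDF parsing for Flatpak Steam library detection can look like, assuming the third-party `vdf` package; the Flatpak path is the standard one, but the key layout and helper are illustrative rather than Jackify's actual handler:

```python
# Hypothetical sketch: locate compatdata directories for Flatpak Steam via libraryfolders.vdf.
from pathlib import Path
import vdf  # third-party Valve Data Format parser (assumed available)

FLATPAK_STEAM = Path.home() / ".var/app/com.valvesoftware.Steam/.local/share/Steam"

def flatpak_compatdata_dirs():
    library_file = FLATPAK_STEAM / "steamapps" / "libraryfolders.vdf"
    if not library_file.is_file():
        return []
    data = vdf.load(library_file.open())  # parse the VDF instead of regex-scraping it
    dirs = []
    for entry in data.get("libraryfolders", {}).values():
        compatdata = Path(entry["path"]) / "steamapps" / "compatdata"
        if compatdata.is_dir():
            dirs.append(compatdata)
    return dirs
```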
### Bug Fixes
- TTW installation UI performance (batched output processing, non-blocking operations)
- Activity window animations (removed custom timers, Qt native rendering)
- Timer reset when returning from TTW screen
- Fixed bandwidth limit KB/s to bytes conversion
- Fixed AttributeError in AutomatedPrefixService.restart_steam()
### Engine Updates
- jackify-engine 0.4.0 with GPU texture conversion fixes and refactored file progress reporting
---
## v0.1.7.1 - Wine Component Verification & Flatpak Steam Fixes ## v0.1.7.1 - Wine Component Verification & Flatpak Steam Fixes
**Release Date:** November 11, 2025 **Release Date:** November 11, 2025
@@ -479,6 +569,23 @@ laf - TTW Installation function using Hoolamike application - https://github.co
- **Clean Architecture**: Removed obsolete service imports, initializations, and cleanup methods - **Clean Architecture**: Removed obsolete service imports, initializations, and cleanup methods
- **Code Quality**: Eliminated "tombstone comments" and unused service references - **Code Quality**: Eliminated "tombstone comments" and unused service references
### Deferred Features (Available in Future Release)
#### OAuth 2.0 Authentication for Nexus Mods
**Status:** Fully implemented but disabled pending Nexus Mods approval
The OAuth 2.0 authentication system has been fully developed and tested, but is temporarily disabled in v0.1.8 as we await approval from Nexus Mods for our OAuth application. The backend code remains intact and will be re-enabled immediately upon approval.
**Features (ready for deployment):**
- **Secure OAuth 2.0 + PKCE Flow**: Modern authentication to replace API key dependency
- **Encrypted Token Storage**: Tokens stored using Fernet encryption with automatic refresh (see the sketch at the end of this subsection)
- **GUI Integration**: Clean status display on Install Modlist screen with authorize/revoke functionality
- **CLI Integration**: OAuth menu in Additional Tasks for command-line users
- **API Key Fallback**: Optional legacy API key support (configurable in Settings)
- **Unified Auth Service**: Single authentication layer supporting both OAuth and API key methods
**Current Limitation:** Awaiting Nexus approval for `jackify://oauth/callback` custom URI. Once approved, OAuth will be enabled as the primary authentication method with API key as optional fallback.
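For reference, a minimal sketch of Fernet-encrypted token storage, assuming the `cryptography` package; the token file path matches the `nexus-oauth.json` location used by the OAuthTokenHandler added later in this commit, while the key handling and function names are illustrative:

```python
# Hypothetical sketch of encrypted OAuth token storage; not Jackify's exact implementation.
import json
from pathlib import Path
from cryptography.fernet import Fernet

TOKEN_FILE = Path.home() / ".config" / "jackify" / "nexus-oauth.json"

def save_tokens(tokens: dict, key: bytes) -> None:
    """Encrypt and persist the token payload; `key` must be a valid Fernet key."""
    TOKEN_FILE.parent.mkdir(parents=True, exist_ok=True)
    TOKEN_FILE.write_bytes(Fernet(key).encrypt(json.dumps(tokens).encode("utf-8")))
    TOKEN_FILE.chmod(0o600)  # keep the token file private to the current user

def load_tokens(key: bytes) -> dict:
    """Decrypt and return the stored token payload (raises if missing or tampered with)."""
    return json.loads(Fernet(key).decrypt(TOKEN_FILE.read_bytes()))
```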
### Technical Details ### Technical Details
- **Single Shortcut Creation Path**: All workflows now use `run_working_workflow()` → `create_shortcut_with_native_service()` - **Single Shortcut Creation Path**: All workflows now use `run_working_workflow()` → `create_shortcut_with_native_service()`
- **Service Layer Cleanup**: Removed dual codepath architecture in favor of proven automated workflows - **Service Layer Cleanup**: Removed dual codepath architecture in favor of proven automated workflows

View File

@@ -5,4 +5,4 @@ This package provides both CLI and GUI interfaces for managing
Wabbajack modlists natively on Linux systems. Wabbajack modlists natively on Linux systems.
""" """
__version__ = "0.1.7.1" __version__ = "0.2.0"

View File

@@ -30,6 +30,8 @@ def _get_user_proton_version():
from jackify.backend.handlers.wine_utils import WineUtils from jackify.backend.handlers.wine_utils import WineUtils
config_handler = ConfigHandler() config_handler = ConfigHandler()
# Use Install Proton (not Game Proton) for installation/texture processing
# get_proton_path() returns the Install Proton path
user_proton_path = config_handler.get_proton_path() user_proton_path = config_handler.get_proton_path()
if user_proton_path == 'auto': if user_proton_path == 'auto':
@@ -90,15 +92,15 @@ def get_jackify_engine_path():
logger.debug(f"Using engine from environment variable: {env_engine_path}") logger.debug(f"Using engine from environment variable: {env_engine_path}")
return env_engine_path return env_engine_path
# Priority 2: PyInstaller bundle (most specific detection) # Priority 2: Frozen bundle (most specific detection)
if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'): if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
# Running in a PyInstaller bundle # Running inside a frozen bundle
# Engine is expected at <bundle_root>/jackify/engine/jackify-engine # Engine is expected at <bundle_root>/jackify/engine/jackify-engine
engine_path = os.path.join(sys._MEIPASS, 'jackify', 'engine', 'jackify-engine') engine_path = os.path.join(sys._MEIPASS, 'jackify', 'engine', 'jackify-engine')
if os.path.exists(engine_path): if os.path.exists(engine_path):
return engine_path return engine_path
# Fallback: log warning but continue to other detection methods # Fallback: log warning but continue to other detection methods
logger.warning(f"PyInstaller engine not found at expected path: {engine_path}") logger.warning(f"Frozen-bundle engine not found at expected path: {engine_path}")
# Priority 3: Check if THIS process is actually running from Jackify AppImage # Priority 3: Check if THIS process is actually running from Jackify AppImage
# (not just inheriting APPDIR from another AppImage like Cursor) # (not just inheriting APPDIR from another AppImage like Cursor)
@@ -123,7 +125,7 @@ def get_jackify_engine_path():
# If all else fails, log error and return the source path anyway # If all else fails, log error and return the source path anyway
logger.error(f"jackify-engine not found in any expected location. Tried:") logger.error(f"jackify-engine not found in any expected location. Tried:")
logger.error(f" PyInstaller: {getattr(sys, '_MEIPASS', 'N/A')}/jackify/engine/jackify-engine") logger.error(f" Frozen bundle: {getattr(sys, '_MEIPASS', 'N/A')}/jackify/engine/jackify-engine")
logger.error(f" AppImage: {appdir or 'N/A'}/opt/jackify/engine/jackify-engine") logger.error(f" AppImage: {appdir or 'N/A'}/opt/jackify/engine/jackify-engine")
logger.error(f" Source: {engine_path}") logger.error(f" Source: {engine_path}")
logger.error("This will likely cause installation failures.") logger.error("This will likely cause installation failures.")
@@ -481,53 +483,76 @@ class ModlistInstallCLI:
self.context['download_dir'] = download_dir_path self.context['download_dir'] = download_dir_path
self.logger.debug(f"Download directory context set to: {self.context['download_dir']}") self.logger.debug(f"Download directory context set to: {self.context['download_dir']}")
# 5. Prompt for Nexus API key (skip if in context and valid) # 5. Get Nexus authentication (OAuth or API key)
if 'nexus_api_key' not in self.context or not self.context.get('nexus_api_key'): if 'nexus_api_key' not in self.context or not self.context.get('nexus_api_key'):
from jackify.backend.services.api_key_service import APIKeyService from jackify.backend.services.nexus_auth_service import NexusAuthService
api_key_service = APIKeyService() auth_service = NexusAuthService()
saved_key = api_key_service.get_saved_api_key()
api_key = None
if saved_key:
print("\n" + "-" * 28)
print(f"{COLOR_INFO}A Nexus API Key is already saved.{COLOR_RESET}")
use_saved = input(f"{COLOR_PROMPT}Use the saved API key? [Y/n]: {COLOR_RESET}").strip().lower()
if use_saved in ('', 'y', 'yes'):
api_key = saved_key
else:
new_key = input(f"{COLOR_PROMPT}Enter a new Nexus API Key (or press Enter to keep the saved one): {COLOR_RESET}").strip()
if new_key:
api_key = new_key
replace = input(f"{COLOR_PROMPT}Replace the saved key with this one? [y/N]: {COLOR_RESET}").strip().lower()
if replace == 'y':
if api_key_service.save_api_key(api_key):
print(f"{COLOR_SUCCESS}API key saved successfully.{COLOR_RESET}")
else:
print(f"{COLOR_WARNING}Failed to save API key. Using for this session only.{COLOR_RESET}")
else:
print(f"{COLOR_INFO}Using new key for this session only. Saved key unchanged.{COLOR_RESET}")
else:
api_key = saved_key
else:
print("\n" + "-" * 28)
print(f"{COLOR_INFO}A Nexus Mods API key is required for downloading mods.{COLOR_RESET}")
print(f"{COLOR_INFO}You can get your personal key at: {COLOR_SELECTION}https://www.nexusmods.com/users/myaccount?tab=api{COLOR_RESET}")
print(f"{COLOR_WARNING}Your API Key is NOT saved locally. It is used only for this session unless you choose to save it.{COLOR_RESET}")
api_key = input(f"{COLOR_PROMPT}Enter Nexus API Key (or 'q' to cancel): {COLOR_RESET}").strip()
if not api_key or api_key.lower() == 'q':
self.logger.info("User cancelled or provided no API key.")
return None
save = input(f"{COLOR_PROMPT}Would you like to save this API key for future use? [y/N]: {COLOR_RESET}").strip().lower()
if save == 'y':
if api_key_service.save_api_key(api_key):
print(f"{COLOR_SUCCESS}API key saved successfully.{COLOR_RESET}")
else:
print(f"{COLOR_WARNING}Failed to save API key. Using for this session only.{COLOR_RESET}")
else:
print(f"{COLOR_INFO}Using API key for this session only. It will not be saved.{COLOR_RESET}")
# Set the API key in context regardless of which path was taken # Get current auth status
authenticated, method, username = auth_service.get_auth_status()
if authenticated:
# Already authenticated - use existing auth
if method == 'oauth':
print("\n" + "-" * 28)
print(f"{COLOR_SUCCESS}Nexus Authentication: Authorized via OAuth{COLOR_RESET}")
if username:
print(f"{COLOR_INFO}Logged in as: {username}{COLOR_RESET}")
elif method == 'api_key':
print("\n" + "-" * 28)
print(f"{COLOR_INFO}Nexus Authentication: Using API Key (Legacy){COLOR_RESET}")
# Get valid token/key
api_key = auth_service.ensure_valid_auth()
if api_key:
self.context['nexus_api_key'] = api_key self.context['nexus_api_key'] = api_key
self.logger.debug(f"NEXUS_API_KEY is set in environment for engine (presence check).") else:
# Auth expired or invalid - prompt to set up
print(f"\n{COLOR_WARNING}Your authentication has expired or is invalid.{COLOR_RESET}")
authenticated = False
if not authenticated:
# Not authenticated - offer to set up OAuth
print("\n" + "-" * 28)
print(f"{COLOR_WARNING}Nexus Mods authentication is required for downloading mods.{COLOR_RESET}")
print(f"\n{COLOR_PROMPT}Would you like to authorize with Nexus now?{COLOR_RESET}")
print(f"{COLOR_INFO}This will open your browser for secure OAuth authorization.{COLOR_RESET}")
authorize = input(f"{COLOR_PROMPT}Authorize now? [Y/n]: {COLOR_RESET}").strip().lower()
if authorize in ('', 'y', 'yes'):
# Launch OAuth authorization
print(f"\n{COLOR_INFO}Starting OAuth authorization...{COLOR_RESET}")
print(f"{COLOR_WARNING}Your browser will open shortly.{COLOR_RESET}")
print(f"{COLOR_INFO}Note: You may see a security warning about a self-signed certificate.{COLOR_RESET}")
print(f"{COLOR_INFO}This is normal - click 'Advanced' and 'Proceed' to continue.{COLOR_RESET}")
def show_message(msg):
print(f"\n{COLOR_INFO}{msg}{COLOR_RESET}")
success = auth_service.authorize_oauth(show_browser_message_callback=show_message)
if success:
print(f"\n{COLOR_SUCCESS}OAuth authorization successful!{COLOR_RESET}")
_, _, username = auth_service.get_auth_status()
if username:
print(f"{COLOR_INFO}Authorized as: {username}{COLOR_RESET}")
api_key = auth_service.ensure_valid_auth()
if api_key:
self.context['nexus_api_key'] = api_key
else:
print(f"{COLOR_ERROR}Failed to retrieve auth token after authorization.{COLOR_RESET}")
return None
else:
print(f"\n{COLOR_ERROR}OAuth authorization failed.{COLOR_RESET}")
return None
else:
# User declined OAuth - cancelled
print(f"\n{COLOR_INFO}Authorization required to proceed. Installation cancelled.{COLOR_RESET}")
self.logger.info("User declined Nexus authorization.")
return None
self.logger.debug(f"Nexus authentication configured for engine.")
# Display summary and confirm # Display summary and confirm
self._display_summary() # Ensure this method exists or implement it self._display_summary() # Ensure this method exists or implement it
@@ -623,10 +648,22 @@ class ModlistInstallCLI:
download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool) download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool)
print(f"Download Directory: {download_dir_display}") print(f"Download Directory: {download_dir_display}")
if self.context.get('nexus_api_key'): # Show authentication method
print(f"Nexus API Key: [SET]") from jackify.backend.services.nexus_auth_service import NexusAuthService
auth_service = NexusAuthService()
authenticated, method, username = auth_service.get_auth_status()
if method == 'oauth':
auth_display = f"Nexus Authentication: OAuth"
if username:
auth_display += f" ({username})"
elif method == 'api_key':
auth_display = "Nexus Authentication: API Key (Legacy)"
else: else:
print(f"Nexus API Key: [NOT SET - WILL LIKELY FAIL]") # Should never reach here since we validate auth before getting to summary
auth_display = "Nexus Authentication: Unknown"
print(auth_display)
print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}") print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}")
def configuration_phase(self): def configuration_phase(self):
@@ -719,7 +756,7 @@ class ModlistInstallCLI:
# --- End Patch --- # --- End Patch ---
# Build command # Build command
cmd = [engine_path, 'install'] cmd = [engine_path, 'install', '--show-file-progress']
# Determine if this is a local .wabbajack file or an online modlist # Determine if this is a local .wabbajack file or an online modlist
modlist_value = self.context.get('modlist_value') modlist_value = self.context.get('modlist_value')
if modlist_value and modlist_value.endswith('.wabbajack') and os.path.isfile(modlist_value): if modlist_value and modlist_value.endswith('.wabbajack') and os.path.isfile(modlist_value):
@@ -738,6 +775,12 @@ class ModlistInstallCLI:
cmd.append('--debug') cmd.append('--debug')
self.logger.info("Adding --debug flag to jackify-engine") self.logger.info("Adding --debug flag to jackify-engine")
# Check GPU setting and add --no-gpu flag if disabled
gpu_enabled = config_handler.get('enable_gpu_texture_conversion', True)
if not gpu_enabled:
cmd.append('--no-gpu')
self.logger.info("GPU texture conversion disabled - adding --no-gpu flag to jackify-engine")
# Store original environment values to restore later # Store original environment values to restore later
original_env_values = { original_env_values = {
'NEXUS_API_KEY': os.environ.get('NEXUS_API_KEY'), 'NEXUS_API_KEY': os.environ.get('NEXUS_API_KEY'),
@@ -771,9 +814,11 @@ class ModlistInstallCLI:
else: else:
self.logger.warning(f"File descriptor limit: {message}") self.logger.warning(f"File descriptor limit: {message}")
# Popen now inherits the modified os.environ because env=None # Use cleaned environment to prevent AppImage variable inheritance
from jackify.backend.handlers.subprocess_utils import get_clean_subprocess_env
clean_env = get_clean_subprocess_env()
# Store process reference for cleanup # Store process reference for cleanup
self._current_process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=False, env=None, cwd=engine_dir) self._current_process = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=False, env=clean_env, cwd=engine_dir)
proc = self._current_process proc = self._current_process
# Read output in binary mode to properly handle carriage returns # Read output in binary mode to properly handle carriage returns
@@ -1513,8 +1558,20 @@ class ModlistInstallCLI:
download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool) download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool)
print(f"Download Directory: {download_dir_display}") print(f"Download Directory: {download_dir_display}")
if self.context.get('nexus_api_key'): # Show authentication method
print(f"Nexus API Key: [SET]") from jackify.backend.services.nexus_auth_service import NexusAuthService
auth_service = NexusAuthService()
authenticated, method, username = auth_service.get_auth_status()
if method == 'oauth':
auth_display = f"Nexus Authentication: OAuth"
if username:
auth_display += f" ({username})"
elif method == 'api_key':
auth_display = "Nexus Authentication: API Key (Legacy)"
else: else:
print(f"Nexus API Key: [NOT SET - WILL LIKELY FAIL]") # Should never reach here since we validate auth before getting to summary
auth_display = "Nexus Authentication: Unknown"
print(auth_display)
print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}") print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}")

View File

@@ -11,7 +11,9 @@ import logging
import shutil import shutil
import re import re
import base64 import base64
import hashlib
from pathlib import Path from pathlib import Path
from typing import Optional
# Initialize logger # Initialize logger
logger = logging.getLogger(__name__) logger = logging.getLogger(__name__)
@@ -40,7 +42,7 @@ class ConfigHandler:
self.config_dir = os.path.expanduser("~/.config/jackify") self.config_dir = os.path.expanduser("~/.config/jackify")
self.config_file = os.path.join(self.config_dir, "config.json") self.config_file = os.path.join(self.config_dir, "config.json")
self.settings = { self.settings = {
"version": "0.0.5", "version": "0.2.0",
"last_selected_modlist": None, "last_selected_modlist": None,
"steam_libraries": [], "steam_libraries": [],
"resolution": None, "resolution": None,
@@ -52,13 +54,20 @@ class ConfigHandler:
"modlist_install_base_dir": os.path.expanduser("~/Games"), # Configurable base directory for modlist installations "modlist_install_base_dir": os.path.expanduser("~/Games"), # Configurable base directory for modlist installations
"modlist_downloads_base_dir": os.path.expanduser("~/Games/Modlist_Downloads"), # Configurable base directory for downloads "modlist_downloads_base_dir": os.path.expanduser("~/Games/Modlist_Downloads"), # Configurable base directory for downloads
"jackify_data_dir": None, # Configurable Jackify data directory (default: ~/Jackify) "jackify_data_dir": None, # Configurable Jackify data directory (default: ~/Jackify)
"use_winetricks_for_components": True, # True = use winetricks (faster), False = use protontricks for all (legacy) "use_winetricks_for_components": True, # DEPRECATED: Migrated to component_installation_method. Kept for backward compatibility.
"game_proton_path": None # Proton version for game shortcuts (can be any Proton 9+), separate from install proton "component_installation_method": "winetricks", # "winetricks" (default) or "system_protontricks"
"game_proton_path": None, # Proton version for game shortcuts (can be any Proton 9+), separate from install proton
"steam_restart_strategy": "jackify", # "jackify" (default) or "nak_simple"
"window_width": None, # Saved window width (None = use dynamic sizing)
"window_height": None # Saved window height (None = use dynamic sizing)
} }
# Load configuration if exists # Load configuration if exists
self._load_config() self._load_config()
# Perform version migrations
self._migrate_config()
# If steam_path is not set, detect it # If steam_path is not set, detect it
if not self.settings["steam_path"]: if not self.settings["steam_path"]:
self.settings["steam_path"] = self._detect_steam_path() self.settings["steam_path"] = self._detect_steam_path()
@@ -115,7 +124,10 @@ class ConfigHandler:
return None return None
def _load_config(self): def _load_config(self):
"""Load configuration from file""" """
Load configuration from file and update in-memory cache.
For legacy compatibility with initialization code.
"""
try: try:
if os.path.exists(self.config_file): if os.path.exists(self.config_file):
with open(self.config_file, 'r') as f: with open(self.config_file, 'r') as f:
@@ -129,6 +141,78 @@ class ConfigHandler:
except Exception as e: except Exception as e:
logger.error(f"Error loading configuration: {e}") logger.error(f"Error loading configuration: {e}")
def _migrate_config(self):
"""
Migrate configuration between versions
Handles breaking changes and data format updates
"""
current_version = self.settings.get("version", "0.0.0")
target_version = "0.2.0"
if current_version == target_version:
return
logger.info(f"Migrating config from {current_version} to {target_version}")
# Migration: v0.0.x -> v0.2.0
# Encryption changed from cryptography (Fernet) to pycryptodome (AES-GCM)
# Old encrypted API keys cannot be decrypted, must be re-entered
if current_version < "0.2.0":
# Clear old encrypted credentials
if self.settings.get("nexus_api_key"):
logger.warning("Clearing saved API key due to encryption format change")
logger.warning("Please re-enter your Nexus API key in Settings")
self.settings["nexus_api_key"] = None
# Clear OAuth token file (different encryption format)
oauth_token_file = Path(self.config_dir) / "nexus-oauth.json"
if oauth_token_file.exists():
logger.warning("Clearing saved OAuth token due to encryption format change")
logger.warning("Please re-authorize with Nexus Mods")
try:
oauth_token_file.unlink()
except Exception as e:
logger.error(f"Failed to remove old OAuth token: {e}")
# Remove obsolete keys
obsolete_keys = [
"hoolamike_install_path",
"hoolamike_version",
"api_key_fallback_enabled",
"proton_version", # Display string only, path stored in proton_path
"game_proton_version" # Display string only, path stored in game_proton_path
]
removed_count = 0
for key in obsolete_keys:
if key in self.settings:
del self.settings[key]
removed_count += 1
if removed_count > 0:
logger.info(f"Removed {removed_count} obsolete config keys")
# Update version
self.settings["version"] = target_version
self.save_config()
logger.info("Config migration completed")
def _read_config_from_disk(self):
"""
Read configuration directly from disk without caching.
Returns merged config (defaults + saved values).
"""
try:
config = self.settings.copy() # Start with defaults
if os.path.exists(self.config_file):
with open(self.config_file, 'r') as f:
saved_config = json.load(f)
config.update(saved_config)
return config
except Exception as e:
logger.error(f"Error reading configuration from disk: {e}")
return self.settings.copy()
def reload_config(self): def reload_config(self):
"""Reload configuration from disk to pick up external changes""" """Reload configuration from disk to pick up external changes"""
self._load_config() self._load_config()
@@ -154,8 +238,12 @@ class ConfigHandler:
return False return False
def get(self, key, default=None): def get(self, key, default=None):
"""Get a configuration value by key""" """
return self.settings.get(key, default) Get a configuration value by key.
Always reads fresh from disk to avoid stale data.
"""
config = self._read_config_from_disk()
return config.get(key, default)
def set(self, key, value): def set(self, key, value):
"""Set a configuration value""" """Set a configuration value"""
@@ -214,9 +302,126 @@ class ConfigHandler:
"""Get the path to protontricks executable""" """Get the path to protontricks executable"""
return self.settings.get("protontricks_path") return self.settings.get("protontricks_path")
def _get_encryption_key(self) -> bytes:
"""
Generate encryption key for API key storage using same method as OAuth tokens
Returns:
Fernet-compatible encryption key
"""
import socket
import getpass
try:
hostname = socket.gethostname()
username = getpass.getuser()
# Try to get machine ID
machine_id = None
try:
with open('/etc/machine-id', 'r') as f:
machine_id = f.read().strip()
except:
try:
with open('/var/lib/dbus/machine-id', 'r') as f:
machine_id = f.read().strip()
except:
pass
if machine_id:
key_material = f"{hostname}:{username}:{machine_id}:jackify"
else:
key_material = f"{hostname}:{username}:jackify"
except Exception as e:
logger.warning(f"Failed to get machine info for encryption: {e}")
key_material = "jackify:default:key"
# Generate Fernet-compatible key
key_bytes = hashlib.sha256(key_material.encode('utf-8')).digest()
return base64.urlsafe_b64encode(key_bytes)
def _encrypt_api_key(self, api_key: str) -> str:
"""
Encrypt API key using AES-GCM
Args:
api_key: Plain text API key
Returns:
Encrypted API key string
"""
try:
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
# Derive 32-byte AES key
key = base64.urlsafe_b64decode(self._get_encryption_key())
# Generate random nonce
nonce = get_random_bytes(12)
# Encrypt with AES-GCM
cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
ciphertext, tag = cipher.encrypt_and_digest(api_key.encode('utf-8'))
# Combine and encode
combined = nonce + ciphertext + tag
return base64.b64encode(combined).decode('utf-8')
except ImportError:
# Fallback to base64 if pycryptodome not available
logger.warning("pycryptodome not available, using base64 encoding (less secure)")
return base64.b64encode(api_key.encode('utf-8')).decode('utf-8')
except Exception as e:
logger.error(f"Error encrypting API key: {e}")
return ""
def _decrypt_api_key(self, encrypted_key: str) -> Optional[str]:
"""
Decrypt API key using AES-GCM
Args:
encrypted_key: Encrypted API key string
Returns:
Decrypted API key or None on failure
"""
try:
from Crypto.Cipher import AES
# Derive 32-byte AES key
key = base64.urlsafe_b64decode(self._get_encryption_key())
# Decode and split
combined = base64.b64decode(encrypted_key.encode('utf-8'))
nonce = combined[:12]
tag = combined[-16:]
ciphertext = combined[12:-16]
# Decrypt with AES-GCM
cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
plaintext = cipher.decrypt_and_verify(ciphertext, tag)
return plaintext.decode('utf-8')
except ImportError:
# Fallback to base64 decode
try:
return base64.b64decode(encrypted_key.encode('utf-8')).decode('utf-8')
except:
return None
except Exception as e:
# Might be old base64-only format, try decoding
try:
return base64.b64decode(encrypted_key.encode('utf-8')).decode('utf-8')
except:
logger.error(f"Error decrypting API key: {e}")
return None
def save_api_key(self, api_key): def save_api_key(self, api_key):
""" """
Save Nexus API key with base64 encoding Save Nexus API key with Fernet encryption
Args: Args:
api_key (str): Plain text API key api_key (str): Plain text API key
@@ -226,36 +431,49 @@ class ConfigHandler:
""" """
try: try:
if api_key: if api_key:
# Encode the API key using base64 # Encrypt the API key using Fernet
encoded_key = base64.b64encode(api_key.encode('utf-8')).decode('utf-8') encrypted_key = self._encrypt_api_key(api_key)
self.settings["nexus_api_key"] = encoded_key if not encrypted_key:
logger.debug("API key saved successfully") logger.error("Failed to encrypt API key")
return False
self.settings["nexus_api_key"] = encrypted_key
logger.debug("API key encrypted and saved successfully")
else: else:
# Clear the API key if empty # Clear the API key if empty
self.settings["nexus_api_key"] = None self.settings["nexus_api_key"] = None
logger.debug("API key cleared") logger.debug("API key cleared")
return self.save_config() result = self.save_config()
# Set restrictive permissions on config file
if result:
try:
os.chmod(self.config_file, 0o600)
except Exception as e:
logger.warning(f"Could not set restrictive permissions on config: {e}")
return result
except Exception as e: except Exception as e:
logger.error(f"Error saving API key: {e}") logger.error(f"Error saving API key: {e}")
return False return False
def get_api_key(self): def get_api_key(self):
""" """
Retrieve and decode the saved Nexus API key Retrieve and decrypt the saved Nexus API key.
Always reads fresh from disk to pick up changes from other instances Always reads fresh from disk.
Returns: Returns:
str: Decoded API key or None if not saved str: Decrypted API key or None if not saved
""" """
try: try:
# Reload config from disk to pick up changes from Settings dialog config = self._read_config_from_disk()
self._load_config() encrypted_key = config.get("nexus_api_key")
encoded_key = self.settings.get("nexus_api_key") if encrypted_key:
if encoded_key: # Decrypt the API key
# Decode the base64 encoded key decrypted_key = self._decrypt_api_key(encrypted_key)
decoded_key = base64.b64decode(encoded_key.encode('utf-8')).decode('utf-8') return decrypted_key
return decoded_key
return None return None
except Exception as e: except Exception as e:
logger.error(f"Error retrieving API key: {e}") logger.error(f"Error retrieving API key: {e}")
@@ -263,15 +481,14 @@ class ConfigHandler:
def has_saved_api_key(self): def has_saved_api_key(self):
""" """
Check if an API key is saved in configuration Check if an API key is saved in configuration.
Always reads fresh from disk to pick up changes from other instances Always reads fresh from disk.
Returns: Returns:
bool: True if API key exists, False otherwise bool: True if API key exists, False otherwise
""" """
# Reload config from disk to pick up changes from Settings dialog config = self._read_config_from_disk()
self._load_config() return config.get("nexus_api_key") is not None
return self.settings.get("nexus_api_key") is not None
def clear_api_key(self): def clear_api_key(self):
""" """
@@ -519,16 +736,15 @@ class ConfigHandler:
def get_proton_path(self): def get_proton_path(self):
""" """
Retrieve the saved Install Proton path from configuration (for jackify-engine) Retrieve the saved Install Proton path from configuration (for jackify-engine).
Always reads fresh from disk to pick up changes from Settings dialog Always reads fresh from disk.
Returns: Returns:
str: Saved Install Proton path or 'auto' if not saved str: Saved Install Proton path or 'auto' if not saved
""" """
try: try:
# Reload config from disk to pick up changes from Settings dialog config = self._read_config_from_disk()
self._load_config() proton_path = config.get("proton_path", "auto")
proton_path = self.settings.get("proton_path", "auto")
logger.debug(f"Retrieved fresh install proton_path from config: {proton_path}") logger.debug(f"Retrieved fresh install proton_path from config: {proton_path}")
return proton_path return proton_path
except Exception as e: except Exception as e:
@@ -537,21 +753,20 @@ class ConfigHandler:
def get_game_proton_path(self): def get_game_proton_path(self):
""" """
Retrieve the saved Game Proton path from configuration (for game shortcuts) Retrieve the saved Game Proton path from configuration (for game shortcuts).
Falls back to install Proton path if game Proton not set Falls back to install Proton path if game Proton not set.
Always reads fresh from disk to pick up changes from Settings dialog Always reads fresh from disk.
Returns: Returns:
str: Saved Game Proton path, Install Proton path, or 'auto' if not saved str: Saved Game Proton path, Install Proton path, or 'auto' if not saved
""" """
try: try:
# Reload config from disk to pick up changes from Settings dialog config = self._read_config_from_disk()
self._load_config() game_proton_path = config.get("game_proton_path")
game_proton_path = self.settings.get("game_proton_path")
# If game proton not set or set to same_as_install, use install proton # If game proton not set or set to same_as_install, use install proton
if not game_proton_path or game_proton_path == "same_as_install": if not game_proton_path or game_proton_path == "same_as_install":
game_proton_path = self.settings.get("proton_path", "auto") game_proton_path = config.get("proton_path", "auto")
logger.debug(f"Retrieved fresh game proton_path from config: {game_proton_path}") logger.debug(f"Retrieved fresh game proton_path from config: {game_proton_path}")
return game_proton_path return game_proton_path
@@ -561,16 +776,15 @@ class ConfigHandler:
def get_proton_version(self): def get_proton_version(self):
""" """
Retrieve the saved Proton version from configuration Retrieve the saved Proton version from configuration.
Always reads fresh from disk to pick up changes from Settings dialog Always reads fresh from disk.
Returns: Returns:
str: Saved Proton version or 'auto' if not saved str: Saved Proton version or 'auto' if not saved
""" """
try: try:
# Reload config from disk to pick up changes from Settings dialog config = self._read_config_from_disk()
self._load_config() proton_version = config.get("proton_version", "auto")
proton_version = self.settings.get("proton_version", "auto")
logger.debug(f"Retrieved fresh proton_version from config: {proton_version}") logger.debug(f"Retrieved fresh proton_version from config: {proton_version}")
return proton_version return proton_version
except Exception as e: except Exception as e:

File diff suppressed because it is too large Load Diff

View File

@@ -863,60 +863,6 @@ class MenuHandler:
self.logger.debug("_clear_screen: Clearing screen for POSIX by printing 100 newlines.") self.logger.debug("_clear_screen: Clearing screen for POSIX by printing 100 newlines.")
print("\n" * 100, flush=True) print("\n" * 100, flush=True)
def show_hoolamike_menu(self, cli_instance):
"""Show the Hoolamike Modlist Management menu"""
if not hasattr(cli_instance, 'hoolamike_handler') or cli_instance.hoolamike_handler is None:
try:
from .hoolamike_handler import HoolamikeHandler
cli_instance.hoolamike_handler = HoolamikeHandler(
steamdeck=getattr(cli_instance, 'steamdeck', False),
verbose=getattr(cli_instance, 'verbose', False),
filesystem_handler=getattr(cli_instance, 'filesystem_handler', None),
config_handler=getattr(cli_instance, 'config_handler', None),
menu_handler=self
)
except Exception as e:
self.logger.error(f"Failed to initialize Hoolamike features: {e}", exc_info=True)
print(f"{COLOR_ERROR}Error: Failed to initialize Hoolamike features. Check logs.{COLOR_RESET}")
input("\nPress Enter to return to the main menu...")
return # Exit this menu if handler fails
while True:
self._clear_screen()
# Banner display handled by frontend
# Use print_section_header for consistency if available, otherwise manual with COLOR_SELECTION
if hasattr(self, 'print_section_header'): # Check if method exists (it's from ui_utils)
print_section_header("Hoolamike Modlist Management")
else: # Fallback if not imported or available directly on self
print(f"{COLOR_SELECTION}Hoolamike Modlist Management{COLOR_RESET}")
print(f"{COLOR_SELECTION}{'-'*30}{COLOR_RESET}")
print(f"{COLOR_SELECTION}1.{COLOR_RESET} Install or Update Hoolamike App")
print(f"{COLOR_SELECTION}2.{COLOR_RESET} Install Modlist (Nexus Premium)")
print(f"{COLOR_SELECTION}3.{COLOR_RESET} Install Modlist (Non-Premium) {COLOR_DISABLED}(Not Implemented){COLOR_RESET}")
print(f"{COLOR_SELECTION}4.{COLOR_RESET} Install Tale of Two Wastelands (TTW)")
print(f"{COLOR_SELECTION}5.{COLOR_RESET} Edit Hoolamike Configuration")
print(f"{COLOR_SELECTION}0.{COLOR_RESET} Return to Main Menu")
selection = input(f"\n{COLOR_PROMPT}Enter your selection (0-5): {COLOR_RESET}").strip()
if selection.lower() == 'q': # Allow 'q' to re-display menu
continue
if selection == "1":
cli_instance.hoolamike_handler.install_update_hoolamike()
elif selection == "2":
cli_instance.hoolamike_handler.install_modlist(premium=True)
elif selection == "3":
print(f"{COLOR_INFO}Install Modlist (Non-Premium) is not yet implemented.{COLOR_RESET}")
input("\nPress Enter to return to the Hoolamike menu...")
elif selection == "4":
cli_instance.hoolamike_handler.install_ttw()
elif selection == "5":
cli_instance.hoolamike_handler.edit_hoolamike_config()
elif selection == "0":
break
else:
print("Invalid selection. Please try again.")
time.sleep(1)

View File

@@ -571,6 +571,7 @@ class ModlistHandler:
status_callback (callable, optional): A function to call with status updates during configuration. status_callback (callable, optional): A function to call with status updates during configuration.
manual_steps_completed (bool): If True, skip the manual steps prompt (used for new modlist flow). manual_steps_completed (bool): If True, skip the manual steps prompt (used for new modlist flow).
""" """
try:
# Store status_callback for Configuration Summary # Store status_callback for Configuration Summary
self._current_status_callback = status_callback self._current_status_callback = status_callback
@@ -581,6 +582,9 @@ class ModlistHandler:
self.logger.error("Cannot execute configuration steps: Missing required context (modlist_dir, appid, game_var, steamdeck status).") self.logger.error("Cannot execute configuration steps: Missing required context (modlist_dir, appid, game_var, steamdeck status).")
print("Error: Missing required information to start configuration.") print("Error: Missing required information to start configuration.")
return False return False
except Exception as e:
self.logger.error(f"Exception in _execute_configuration_steps initialization: {e}", exc_info=True)
return False
# Step 1: Set protontricks permissions # Step 1: Set protontricks permissions
if status_callback: if status_callback:
@@ -706,15 +710,18 @@ class ModlistHandler:
target_appid = self.appid target_appid = self.appid
# Use user's preferred component installation method (respects settings toggle) # Use user's preferred component installation method (respects settings toggle)
self.logger.debug(f"Getting WINEPREFIX for AppID {target_appid}...")
wineprefix = self.protontricks_handler.get_wine_prefix_path(target_appid) wineprefix = self.protontricks_handler.get_wine_prefix_path(target_appid)
if not wineprefix: if not wineprefix:
self.logger.error("Failed to get WINEPREFIX path for component installation.") self.logger.error("Failed to get WINEPREFIX path for component installation.")
print("Error: Could not determine wine prefix location.") print("Error: Could not determine wine prefix location.")
return False return False
self.logger.debug(f"WINEPREFIX obtained: {wineprefix}")
# Use the winetricks handler which respects the user's toggle setting # Use the winetricks handler which respects the user's toggle setting
try: try:
self.logger.info("Installing Wine components using user's preferred method...") self.logger.info("Installing Wine components using user's preferred method...")
self.logger.debug(f"Calling winetricks_handler.install_wine_components with wineprefix={wineprefix}, game_var={self.game_var_full}, components={components}")
success = self.winetricks_handler.install_wine_components(wineprefix, self.game_var_full, specific_components=components) success = self.winetricks_handler.install_wine_components(wineprefix, self.game_var_full, specific_components=components)
if success: if success:
self.logger.info("Wine component installation completed successfully") self.logger.info("Wine component installation completed successfully")
@@ -920,16 +927,25 @@ class ModlistHandler:
if self.steam_library and self.game_var_full: if self.steam_library and self.game_var_full:
vanilla_game_dir = str(Path(self.steam_library) / "steamapps" / "common" / self.game_var_full) vanilla_game_dir = str(Path(self.steam_library) / "steamapps" / "common" / self.game_var_full)
if not self.path_handler.create_dxvk_conf( dxvk_created = self.path_handler.create_dxvk_conf(
modlist_dir=self.modlist_dir, modlist_dir=self.modlist_dir,
modlist_sdcard=self.modlist_sdcard, modlist_sdcard=self.modlist_sdcard,
steam_library=str(self.steam_library) if self.steam_library else None, # Pass as string or None steam_library=str(self.steam_library) if self.steam_library else None, # Pass as string or None
basegame_sdcard=self.basegame_sdcard, basegame_sdcard=self.basegame_sdcard,
game_var_full=self.game_var_full, game_var_full=self.game_var_full,
vanilla_game_dir=vanilla_game_dir vanilla_game_dir=vanilla_game_dir,
): stock_game_path=self.stock_game_path
self.logger.warning("Failed to create dxvk.conf file.") )
print("Warning: Failed to create dxvk.conf file.") dxvk_verified = self.path_handler.verify_dxvk_conf_exists(
modlist_dir=self.modlist_dir,
steam_library=str(self.steam_library) if self.steam_library else None,
game_var_full=self.game_var_full,
vanilla_game_dir=vanilla_game_dir,
stock_game_path=self.stock_game_path
)
if not dxvk_created or not dxvk_verified:
self.logger.warning("DXVK configuration file is missing or incomplete after post-install steps.")
print("Warning: Failed to verify dxvk.conf file (required for AMD GPUs).")
self.logger.info("Step 10: Creating dxvk.conf... Done") self.logger.info("Step 10: Creating dxvk.conf... Done")
# Step 11a: Small Tasks - Delete Incompatible Plugins # Step 11a: Small Tasks - Delete Incompatible Plugins

View File

@@ -49,7 +49,7 @@ logger = logging.getLogger(__name__) # Standard logger init
# Helper function to get path to jackify-install-engine # Helper function to get path to jackify-install-engine
def get_jackify_engine_path(): def get_jackify_engine_path():
if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'): if getattr(sys, 'frozen', False) and hasattr(sys, '_MEIPASS'):
# Running in a PyInstaller bundle # Running inside the bundled AppImage (frozen)
# Engine is expected at <bundle_root>/jackify/engine/jackify-engine # Engine is expected at <bundle_root>/jackify/engine/jackify-engine
return os.path.join(sys._MEIPASS, 'jackify', 'engine', 'jackify-engine') return os.path.join(sys._MEIPASS, 'jackify', 'engine', 'jackify-engine')
else: else:
@@ -408,51 +408,76 @@ class ModlistInstallCLI:
self.context['download_dir'] = download_dir_path self.context['download_dir'] = download_dir_path
self.logger.debug(f"Download directory context set to: {self.context['download_dir']}") self.logger.debug(f"Download directory context set to: {self.context['download_dir']}")
# 5. Prompt for Nexus API key (skip if in context) # 5. Get Nexus authentication (OAuth or API key)
if 'nexus_api_key' not in self.context: if 'nexus_api_key' not in self.context:
from jackify.backend.services.api_key_service import APIKeyService from jackify.backend.services.nexus_auth_service import NexusAuthService
api_key_service = APIKeyService() auth_service = NexusAuthService()
saved_key = api_key_service.get_saved_api_key()
api_key = None # Get current auth status
if saved_key: authenticated, method, username = auth_service.get_auth_status()
if authenticated:
# Already authenticated - use existing auth
if method == 'oauth':
print("\n" + "-" * 28) print("\n" + "-" * 28)
print(f"{COLOR_INFO}A Nexus API Key is already saved.{COLOR_RESET}") print(f"{COLOR_SUCCESS}Nexus Authentication: Authorized via OAuth{COLOR_RESET}")
use_saved = input(f"{COLOR_PROMPT}Use the saved API key? [Y/n]: {COLOR_RESET}").strip().lower() if username:
if use_saved in ('', 'y', 'yes'): print(f"{COLOR_INFO}Logged in as: {username}{COLOR_RESET}")
api_key = saved_key elif method == 'api_key':
else:
new_key = input(f"{COLOR_PROMPT}Enter a new Nexus API Key (or press Enter to keep the saved one): {COLOR_RESET}").strip()
if new_key:
api_key = new_key
replace = input(f"{COLOR_PROMPT}Replace the saved key with this one? [y/N]: {COLOR_RESET}").strip().lower()
if replace == 'y':
if api_key_service.save_api_key(api_key):
print(f"{COLOR_SUCCESS}API key saved successfully.{COLOR_RESET}")
else:
print(f"{COLOR_WARNING}Failed to save API key. Using for this session only.{COLOR_RESET}")
else:
print(f"{COLOR_INFO}Using new key for this session only. Saved key unchanged.{COLOR_RESET}")
else:
api_key = saved_key
else:
print("\n" + "-" * 28) print("\n" + "-" * 28)
print(f"{COLOR_INFO}A Nexus Mods API key is required for downloading mods.{COLOR_RESET}") print(f"{COLOR_INFO}Nexus Authentication: Using API Key (Legacy){COLOR_RESET}")
print(f"{COLOR_INFO}You can get your personal key at: {COLOR_SELECTION}https://www.nexusmods.com/users/myaccount?tab=api{COLOR_RESET}")
print(f"{COLOR_WARNING}Your API Key is NOT saved locally. It is used only for this session unless you choose to save it.{COLOR_RESET}") # Get valid token/key
api_key = input(f"{COLOR_PROMPT}Enter Nexus API Key (or 'q' to cancel): {COLOR_RESET}").strip() api_key = auth_service.ensure_valid_auth()
if not api_key or api_key.lower() == 'q': if api_key:
self.logger.info("User cancelled or provided no API key.")
return None
save = input(f"{COLOR_PROMPT}Would you like to save this API key for future use? [y/N]: {COLOR_RESET}").strip().lower()
if save == 'y':
if api_key_service.save_api_key(api_key):
print(f"{COLOR_SUCCESS}API key saved successfully.{COLOR_RESET}")
else:
print(f"{COLOR_WARNING}Failed to save API key. Using for this session only.{COLOR_RESET}")
else:
print(f"{COLOR_INFO}Using API key for this session only. It will not be saved.{COLOR_RESET}")
self.context['nexus_api_key'] = api_key self.context['nexus_api_key'] = api_key
self.logger.debug(f"NEXUS_API_KEY is set in environment for engine (presence check).") else:
# Auth expired or invalid - prompt to set up
print(f"\n{COLOR_WARNING}Your authentication has expired or is invalid.{COLOR_RESET}")
authenticated = False
if not authenticated:
# Not authenticated - offer to set up OAuth
print("\n" + "-" * 28)
print(f"{COLOR_WARNING}Nexus Mods authentication is required for downloading mods.{COLOR_RESET}")
print(f"\n{COLOR_PROMPT}Would you like to authorize with Nexus now?{COLOR_RESET}")
print(f"{COLOR_INFO}This will open your browser for secure OAuth authorization.{COLOR_RESET}")
authorize = input(f"{COLOR_PROMPT}Authorize now? [Y/n]: {COLOR_RESET}").strip().lower()
if authorize in ('', 'y', 'yes'):
# Launch OAuth authorization
print(f"\n{COLOR_INFO}Starting OAuth authorization...{COLOR_RESET}")
print(f"{COLOR_WARNING}Your browser will open shortly.{COLOR_RESET}")
print(f"{COLOR_INFO}Note: Your browser may ask permission to open 'xdg-open' or{COLOR_RESET}")
print(f"{COLOR_INFO}Jackify's protocol handler - please click 'Open' or 'Allow'.{COLOR_RESET}")
def show_message(msg):
print(f"\n{COLOR_INFO}{msg}{COLOR_RESET}")
success = auth_service.authorize_oauth(show_browser_message_callback=show_message)
if success:
print(f"\n{COLOR_SUCCESS}OAuth authorization successful!{COLOR_RESET}")
_, _, username = auth_service.get_auth_status()
if username:
print(f"{COLOR_INFO}Authorized as: {username}{COLOR_RESET}")
api_key = auth_service.ensure_valid_auth()
if api_key:
self.context['nexus_api_key'] = api_key
else:
print(f"{COLOR_ERROR}Failed to retrieve auth token after authorization.{COLOR_RESET}")
return None
else:
print(f"\n{COLOR_ERROR}OAuth authorization failed.{COLOR_RESET}")
return None
else:
# User declined OAuth - cancelled
print(f"\n{COLOR_INFO}Authorization required to proceed. Installation cancelled.{COLOR_RESET}")
self.logger.info("User declined Nexus authorization.")
return None
self.logger.debug(f"Nexus authentication configured for engine.")
# Display summary and confirm # Display summary and confirm
self._display_summary() # Ensure this method exists or implement it self._display_summary() # Ensure this method exists or implement it
@@ -502,10 +527,22 @@ class ModlistInstallCLI:
download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool) download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool)
print(f"Download Directory: {download_dir_display}") print(f"Download Directory: {download_dir_display}")
if self.context.get('nexus_api_key'): # Show authentication method
print(f"Nexus API Key: [SET]") from jackify.backend.services.nexus_auth_service import NexusAuthService
auth_service = NexusAuthService()
authenticated, method, username = auth_service.get_auth_status()
if method == 'oauth':
auth_display = f"Nexus Authentication: OAuth"
if username:
auth_display += f" ({username})"
elif method == 'api_key':
auth_display = "Nexus Authentication: API Key (Legacy)"
else: else:
print(f"Nexus API Key: [NOT SET - WILL LIKELY FAIL]") # Should never reach here since we validate auth before getting to summary
auth_display = "Nexus Authentication: Unknown"
print(auth_display)
print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}") print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}")
def configuration_phase(self): def configuration_phase(self):
@@ -597,7 +634,7 @@ class ModlistInstallCLI:
# --- End Patch --- # --- End Patch ---
# Build command # Build command
cmd = [engine_path, 'install'] cmd = [engine_path, 'install', '--show-file-progress']
# Check for debug mode and pass --debug to engine if needed # Check for debug mode and pass --debug to engine if needed
from jackify.backend.handlers.config_handler import ConfigHandler from jackify.backend.handlers.config_handler import ConfigHandler
@@ -607,6 +644,12 @@ class ModlistInstallCLI:
cmd.append('--debug') cmd.append('--debug')
self.logger.info("Debug mode enabled in config - passing --debug flag to jackify-engine") self.logger.info("Debug mode enabled in config - passing --debug flag to jackify-engine")
# Check GPU setting and add --no-gpu flag if disabled
gpu_enabled = config_handler.get('enable_gpu_texture_conversion', True)
if not gpu_enabled:
cmd.append('--no-gpu')
self.logger.info("GPU texture conversion disabled - passing --no-gpu flag to jackify-engine")
# Determine if this is a local .wabbajack file or an online modlist # Determine if this is a local .wabbajack file or an online modlist
modlist_value = self.context.get('modlist_value') modlist_value = self.context.get('modlist_value')
machineid = self.context.get('machineid') machineid = self.context.get('machineid')
@@ -667,8 +710,10 @@ class ModlistInstallCLI:
else: else:
self.logger.warning(f"File descriptor limit: {message}") self.logger.warning(f"File descriptor limit: {message}")
# Popen now inherits the modified os.environ because env=None # Use cleaned environment to prevent AppImage variable inheritance
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=False, env=None, cwd=engine_dir) from jackify.backend.handlers.subprocess_utils import get_clean_subprocess_env
clean_env = get_clean_subprocess_env()
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=False, env=clean_env, cwd=engine_dir)
# Start performance monitoring for the engine process # Start performance monitoring for the engine process
# Adjust monitoring based on debug mode # Adjust monitoring based on debug mode
@@ -1102,10 +1147,22 @@ class ModlistInstallCLI:
download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool) download_dir_display = download_dir_display[0] # Get the Path object from (Path, bool)
print(f"Download Directory: {download_dir_display}") print(f"Download Directory: {download_dir_display}")
if self.context.get('nexus_api_key'): # Show authentication method
print(f"Nexus API Key: [SET]") from jackify.backend.services.nexus_auth_service import NexusAuthService
auth_service = NexusAuthService()
authenticated, method, username = auth_service.get_auth_status()
if method == 'oauth':
auth_display = f"Nexus Authentication: OAuth"
if username:
auth_display += f" ({username})"
elif method == 'api_key':
auth_display = "Nexus Authentication: API Key (Legacy)"
else: else:
print(f"Nexus API Key: [NOT SET - WILL LIKELY FAIL]") # Should never reach here since we validate auth before getting to summary
auth_display = "Nexus Authentication: Unknown"
print(auth_display)
print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}") print(f"{COLOR_INFO}----------------------------------------{COLOR_RESET}")
def _enhance_nexus_error(self, line: str) -> str: def _enhance_nexus_error(self, line: str) -> str:
@@ -1234,52 +1291,65 @@ class ModlistInstallCLI:
print(f"\n{COLOR_INFO}Starting TTW installation workflow...{COLOR_RESET}") print(f"\n{COLOR_INFO}Starting TTW installation workflow...{COLOR_RESET}")
# Import TTW installation handler # Import TTW installation handler
from jackify.backend.handlers.hoolamike_handler import HoolamikeHandler from jackify.backend.handlers.ttw_installer_handler import TTWInstallerHandler
from jackify.backend.models.configuration import SystemInfo from jackify.backend.models.configuration import SystemInfo
from pathlib import Path
system_info = SystemInfo() system_info = SystemInfo()
hoolamike_handler = HoolamikeHandler(system_info) ttw_installer_handler = TTWInstallerHandler(
steamdeck=system_info.is_steamdeck if hasattr(system_info, 'is_steamdeck') else False,
verbose=self.verbose if hasattr(self, 'verbose') else False,
filesystem_handler=self.filesystem_handler if hasattr(self, 'filesystem_handler') else None,
config_handler=self.config_handler if hasattr(self, 'config_handler') else None
)
# Check if Hoolamike is installed # Check if TTW_Linux_Installer is installed
is_installed, installed_version = hoolamike_handler.check_installation_status() ttw_installer_handler._check_installation()
if not is_installed: if not ttw_installer_handler.ttw_installer_installed:
print(f"{COLOR_INFO}Hoolamike (TTW installer) is not installed.{COLOR_RESET}") print(f"{COLOR_INFO}TTW_Linux_Installer is not installed.{COLOR_RESET}")
user_input = input(f"{COLOR_PROMPT}Install Hoolamike? (yes/no): {COLOR_RESET}").strip().lower() user_input = input(f"{COLOR_PROMPT}Install TTW_Linux_Installer? (yes/no): {COLOR_RESET}").strip().lower()
if user_input not in ['yes', 'y']: if user_input not in ['yes', 'y']:
print(f"{COLOR_INFO}TTW installation cancelled.{COLOR_RESET}") print(f"{COLOR_INFO}TTW installation cancelled.{COLOR_RESET}")
return return
# Install Hoolamike # Install TTW_Linux_Installer
print(f"{COLOR_INFO}Installing Hoolamike...{COLOR_RESET}") print(f"{COLOR_INFO}Installing TTW_Linux_Installer...{COLOR_RESET}")
success, message = hoolamike_handler.install_hoolamike() success, message = ttw_installer_handler.install_ttw_installer()
if not success: if not success:
print(f"{COLOR_ERROR}Failed to install Hoolamike: {message}{COLOR_RESET}") print(f"{COLOR_ERROR}Failed to install TTW_Linux_Installer: {message}{COLOR_RESET}")
return return
print(f"{COLOR_INFO}Hoolamike installed successfully.{COLOR_RESET}") print(f"{COLOR_INFO}TTW_Linux_Installer installed successfully.{COLOR_RESET}")
# Get Hoolamike MPI path # Prompt for TTW .mpi file
mpi_path = hoolamike_handler.get_mpi_path() print(f"\n{COLOR_PROMPT}TTW Installer File (.mpi){COLOR_RESET}")
if not mpi_path or not os.path.exists(mpi_path): mpi_path = input(f"{COLOR_PROMPT}Path to TTW .mpi file: {COLOR_RESET}").strip()
print(f"{COLOR_ERROR}Hoolamike MPI file not found at: {mpi_path}{COLOR_RESET}") if not mpi_path:
print(f"{COLOR_WARNING}No .mpi file specified. Cancelling.{COLOR_RESET}")
return
mpi_path = Path(mpi_path).expanduser()
if not mpi_path.exists() or not mpi_path.is_file():
print(f"{COLOR_ERROR}TTW .mpi file not found: {mpi_path}{COLOR_RESET}")
return return
# Prompt for TTW installation directory # Prompt for TTW installation directory
print(f"\n{COLOR_PROMPT}TTW Installation Directory{COLOR_RESET}") print(f"\n{COLOR_PROMPT}TTW Installation Directory{COLOR_RESET}")
print(f"Default: {os.path.join(install_dir, 'TTW')}") default_ttw_dir = os.path.join(install_dir, 'TTW')
print(f"Default: {default_ttw_dir}")
ttw_install_dir = input(f"{COLOR_PROMPT}TTW install directory (Enter for default): {COLOR_RESET}").strip() ttw_install_dir = input(f"{COLOR_PROMPT}TTW install directory (Enter for default): {COLOR_RESET}").strip()
if not ttw_install_dir: if not ttw_install_dir:
ttw_install_dir = os.path.join(install_dir, "TTW") ttw_install_dir = default_ttw_dir
# Run Hoolamike installation # Run TTW installation
print(f"\n{COLOR_INFO}Installing TTW using Hoolamike...{COLOR_RESET}") print(f"\n{COLOR_INFO}Installing TTW using TTW_Linux_Installer...{COLOR_RESET}")
print(f"{COLOR_INFO}This may take a while (15-30 minutes depending on your system).{COLOR_RESET}") print(f"{COLOR_INFO}This may take a while (15-30 minutes depending on your system).{COLOR_RESET}")
success = hoolamike_handler.run_hoolamike_install(mpi_path, ttw_install_dir) success, message = ttw_installer_handler.install_ttw_backend(Path(mpi_path), Path(ttw_install_dir))
if success: if success:
print(f"\n{COLOR_INFO}═══════════════════════════════════════════════════════════════{COLOR_RESET}") print(f"\n{COLOR_INFO}═══════════════════════════════════════════════════════════════{COLOR_RESET}")
@@ -1289,6 +1359,7 @@ class ModlistInstallCLI:
print(f"The modlist '{modlist_name}' is now ready to use with TTW.") print(f"The modlist '{modlist_name}' is now ready to use with TTW.")
else: else:
print(f"\n{COLOR_ERROR}TTW installation failed. Check the logs for details.{COLOR_RESET}") print(f"\n{COLOR_ERROR}TTW installation failed. Check the logs for details.{COLOR_RESET}")
print(f"{COLOR_ERROR}Error: {message}{COLOR_RESET}")
except Exception as e: except Exception as e:
self.logger.error(f"Error during TTW installation: {e}", exc_info=True) self.logger.error(f"Error during TTW installation: {e}", exc_info=True)


@@ -0,0 +1,434 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
OAuth Token Handler
Handles encrypted storage and retrieval of OAuth tokens
"""
import os
import json
import base64
import hashlib
import logging
import time
from typing import Optional, Dict
from pathlib import Path
logger = logging.getLogger(__name__)
class OAuthTokenHandler:
"""
Handles OAuth token storage with simple encryption
Stores tokens in ~/.config/jackify/nexus-oauth.json
"""
def __init__(self, config_dir: Optional[str] = None):
"""
Initialize token handler
Args:
config_dir: Optional custom config directory (defaults to ~/.config/jackify)
"""
if config_dir:
self.config_dir = Path(config_dir)
else:
self.config_dir = Path.home() / ".config" / "jackify"
self.token_file = self.config_dir / "nexus-oauth.json"
# Ensure config directory exists
self.config_dir.mkdir(parents=True, exist_ok=True)
# Generate encryption key based on machine-specific data
self._encryption_key = self._generate_encryption_key()
def _generate_encryption_key(self) -> bytes:
"""
Generate an encryption key based on machine-specific data
Uses hostname + username + machine ID as key material, similar to DPAPI approach.
The key is returned base64-encoded (Fernet-style encoding) and decoded to raw bytes for AES-GCM.
Returns:
base64-encoded 32-byte encryption key
"""
import socket
import getpass
try:
hostname = socket.gethostname()
username = getpass.getuser()
# Try to get machine ID for additional entropy
machine_id = None
try:
# Linux machine-id
with open('/etc/machine-id', 'r') as f:
machine_id = f.read().strip()
except:
try:
# Alternative locations
with open('/var/lib/dbus/machine-id', 'r') as f:
machine_id = f.read().strip()
except:
pass
# Combine multiple sources of machine-specific data
if machine_id:
key_material = f"{hostname}:{username}:{machine_id}:jackify"
else:
key_material = f"{hostname}:{username}:jackify"
except Exception as e:
logger.warning(f"Failed to get machine info for encryption: {e}")
key_material = "jackify:default:key"
# Generate 32-byte key using SHA256 for Fernet
# Fernet requires base64-encoded 32-byte key
key_bytes = hashlib.sha256(key_material.encode('utf-8')).digest()
return base64.urlsafe_b64encode(key_bytes)
def _encrypt_data(self, data: str) -> str:
"""
Encrypt data using AES-GCM (authenticated encryption)
Uses pycryptodome for cross-platform compatibility.
AES-GCM provides authenticated encryption similar to Fernet.
Args:
data: Plain text data
Returns:
Encrypted data as base64 string (nonce:ciphertext:tag format)
"""
try:
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
# Derive 32-byte AES key from encryption_key (which is base64-encoded)
key = base64.urlsafe_b64decode(self._encryption_key)
# Generate random nonce (12 bytes for GCM)
nonce = get_random_bytes(12)
# Create AES-GCM cipher
cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
# Encrypt and get authentication tag
data_bytes = data.encode('utf-8')
ciphertext, tag = cipher.encrypt_and_digest(data_bytes)
# Combine nonce:ciphertext:tag and base64 encode
combined = nonce + ciphertext + tag
return base64.b64encode(combined).decode('utf-8')
except ImportError:
logger.error("pycryptodome package not available for token encryption")
return ""
except Exception as e:
logger.error(f"Failed to encrypt data: {e}")
return ""
def _decrypt_data(self, encrypted_data: str) -> Optional[str]:
"""
Decrypt data using AES-GCM (authenticated encryption)
Args:
encrypted_data: Encrypted data string (base64-encoded nonce:ciphertext:tag)
Returns:
Decrypted plain text or None on failure
"""
try:
from Crypto.Cipher import AES
# Derive 32-byte AES key from encryption_key
key = base64.urlsafe_b64decode(self._encryption_key)
# Decode base64 and split nonce:ciphertext:tag
combined = base64.b64decode(encrypted_data.encode('utf-8'))
nonce = combined[:12]
tag = combined[-16:]
ciphertext = combined[12:-16]
# Create AES-GCM cipher
cipher = AES.new(key, AES.MODE_GCM, nonce=nonce)
# Decrypt and verify authentication tag
plaintext = cipher.decrypt_and_verify(ciphertext, tag)
return plaintext.decode('utf-8')
except ImportError:
logger.error("pycryptodome package not available for token decryption")
return None
except Exception as e:
logger.error(f"Failed to decrypt data: {e}")
return None
def save_token(self, token_data: Dict) -> bool:
"""
Save OAuth token to encrypted file with proper permissions
Args:
token_data: Token data dict from OAuth response
Returns:
True if saved successfully
"""
try:
# Add timestamp for tracking
token_data['_saved_at'] = int(time.time())
# Convert to JSON
json_data = json.dumps(token_data, indent=2)
# Encrypt using AES-GCM (see _encrypt_data)
encrypted = self._encrypt_data(json_data)
if not encrypted:
logger.error("Encryption failed, cannot save token")
return False
# Save to file with restricted permissions
# Write to temp file first, then move (atomic operation)
import tempfile
fd, temp_path = tempfile.mkstemp(dir=self.config_dir, prefix='.oauth_tmp_')
try:
with os.fdopen(fd, 'w') as f:
json.dump({'encrypted_data': encrypted}, f, indent=2)
# Set restrictive permissions (owner read/write only)
os.chmod(temp_path, 0o600)
# Atomic move
os.replace(temp_path, self.token_file)
logger.info(f"Saved encrypted OAuth token to {self.token_file}")
return True
except Exception as e:
# Clean up temp file on error
try:
os.unlink(temp_path)
except:
pass
raise e
except Exception as e:
logger.error(f"Failed to save OAuth token: {e}")
return False
def load_token(self) -> Optional[Dict]:
"""
Load OAuth token from encrypted file
Returns:
Token data dict or None if not found or invalid
"""
if not self.token_file.exists():
logger.debug("No OAuth token file found")
return None
try:
# Load encrypted data
with open(self.token_file, 'r') as f:
data = json.load(f)
encrypted = data.get('encrypted_data')
if not encrypted:
logger.error("Token file missing encrypted_data field")
return None
# Decrypt
decrypted = self._decrypt_data(encrypted)
if not decrypted:
logger.error("Failed to decrypt token data")
return None
# Parse JSON
token_data = json.loads(decrypted)
logger.debug("Successfully loaded OAuth token")
return token_data
except json.JSONDecodeError as e:
logger.error(f"Token file contains invalid JSON: {e}")
return None
except Exception as e:
logger.error(f"Failed to load OAuth token: {e}")
return None
def delete_token(self) -> bool:
"""
Delete OAuth token file
Returns:
True if deleted successfully
"""
try:
if self.token_file.exists():
self.token_file.unlink()
logger.info("Deleted OAuth token file")
return True
else:
logger.debug("No OAuth token file to delete")
return False
except Exception as e:
logger.error(f"Failed to delete OAuth token: {e}")
return False
def has_token(self) -> bool:
"""
Check if OAuth token file exists
Returns:
True if token file exists
"""
return self.token_file.exists()
def is_token_expired(self, token_data: Optional[Dict] = None, buffer_minutes: int = 5) -> bool:
"""
Check if token is expired or close to expiring
Args:
token_data: Optional token data dict (loads from file if not provided)
buffer_minutes: Minutes before expiry to consider token expired (default 5)
Returns:
True if token is expired or will expire within buffer_minutes
"""
if token_data is None:
token_data = self.load_token()
if not token_data:
return True
# Extract OAuth data if nested
oauth_data = token_data.get('oauth', token_data)
# Get expiry information
expires_in = oauth_data.get('expires_in')
saved_at = token_data.get('_saved_at')
if not expires_in or not saved_at:
logger.debug("Token missing expiry information, assuming valid")
return False # Assume token is valid if no expiry info
# Calculate expiry time
expires_at = saved_at + expires_in
buffer_seconds = buffer_minutes * 60
now = int(time.time())
# Check if expired or within buffer
is_expired = (expires_at - buffer_seconds) < now
if is_expired:
remaining = expires_at - now
if remaining < 0:
logger.debug(f"Token expired {-remaining} seconds ago")
else:
logger.debug(f"Token expires in {remaining} seconds (within buffer)")
return is_expired
def get_access_token(self) -> Optional[str]:
"""
Get access token from storage
Returns:
Access token string or None if not found or expired
"""
token_data = self.load_token()
if not token_data:
return None
# Check if expired
if self.is_token_expired(token_data):
logger.debug("Stored token is expired")
return None
# Extract access token from OAuth structure
oauth_data = token_data.get('oauth', token_data)
access_token = oauth_data.get('access_token')
if not access_token:
logger.error("Token data missing access_token field")
return None
return access_token
def get_refresh_token(self) -> Optional[str]:
"""
Get refresh token from storage
Returns:
Refresh token string or None if not found
"""
token_data = self.load_token()
if not token_data:
return None
# Extract refresh token from OAuth structure
oauth_data = token_data.get('oauth', token_data)
refresh_token = oauth_data.get('refresh_token')
return refresh_token
def get_token_info(self) -> Dict:
"""
Get diagnostic information about current token
Returns:
Dict with token status information
"""
token_data = self.load_token()
if not token_data:
return {
'has_token': False,
'error': 'No token file found'
}
oauth_data = token_data.get('oauth', token_data)
expires_in = oauth_data.get('expires_in')
saved_at = token_data.get('_saved_at')
# Check if refresh token is likely expired (30 days since last auth)
# Nexus doesn't provide refresh token expiry, so we estimate conservatively
REFRESH_TOKEN_LIFETIME_DAYS = 30
now = int(time.time())
refresh_token_age_days = (now - saved_at) / 86400 if saved_at else 0
refresh_token_likely_expired = refresh_token_age_days > REFRESH_TOKEN_LIFETIME_DAYS
if expires_in and saved_at:
expires_at = saved_at + expires_in
remaining_seconds = expires_at - now
return {
'has_token': True,
'has_refresh_token': bool(oauth_data.get('refresh_token')),
'expires_in_seconds': remaining_seconds,
'expires_in_minutes': remaining_seconds / 60,
'expires_in_hours': remaining_seconds / 3600,
'is_expired': remaining_seconds < 0,
'expires_soon_5min': remaining_seconds < 300,
'expires_soon_15min': remaining_seconds < 900,
'saved_at': saved_at,
'expires_at': expires_at,
'refresh_token_age_days': refresh_token_age_days,
'refresh_token_likely_expired': refresh_token_likely_expired,
}
else:
return {
'has_token': True,
'has_refresh_token': bool(oauth_data.get('refresh_token')),
'refresh_token_age_days': refresh_token_age_days,
'refresh_token_likely_expired': refresh_token_likely_expired,
'error': 'Token missing expiry information'
}
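
For context, a minimal usage sketch of the token handler above (illustrative only: the import path and the exact shape of the token payload are assumptions, neither appears in this diff):

from oauth_token_handler import OAuthTokenHandler  # hypothetical import path

handler = OAuthTokenHandler()
# Persist a token response as returned by the Nexus OAuth flow (field names assumed)
handler.save_token({'oauth': {'access_token': 'abc', 'refresh_token': 'def', 'expires_in': 3600}})
# Later: returns the access token, or None if the file is missing or the token is expired
token = handler.get_access_token()
if token is None and handler.get_refresh_token():
    # a refresh would be triggered here by the auth service, not by this handler
    pass
print(handler.get_token_info())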


@@ -12,6 +12,7 @@ import shutil
from pathlib import Path
from typing import Optional, Union, Dict, Any, List, Tuple
from datetime import datetime
import vdf

# Initialize logger
logger = logging.getLogger(__name__)
@@ -258,7 +259,7 @@ class PathHandler:
        return False

    @staticmethod
    def create_dxvk_conf(modlist_dir, modlist_sdcard, steam_library, basegame_sdcard, game_var_full, vanilla_game_dir=None, stock_game_path=None):
        """
        Create dxvk.conf file in the appropriate location
@@ -269,6 +270,7 @@ class PathHandler:
            basegame_sdcard (bool): Whether the base game is on an SD card
            game_var_full (str): Full name of the game (e.g., "Skyrim Special Edition")
            vanilla_game_dir (str): Optional path to vanilla game directory for fallback
            stock_game_path (str): Direct path to detected stock game directory (if available)

        Returns:
            bool: True on success, False on failure
@@ -276,49 +278,45 @@ class PathHandler:
        try:
            logger.info("Creating dxvk.conf file...")

            candidate_dirs = PathHandler._build_dxvk_candidate_dirs(
                modlist_dir=modlist_dir,
                stock_game_path=stock_game_path,
                steam_library=steam_library,
                game_var_full=game_var_full,
                vanilla_game_dir=vanilla_game_dir
            )

            if not candidate_dirs:
                logger.error("Could not determine location for dxvk.conf (no candidate directories found)")
                return False

            target_dir = None
            for directory in candidate_dirs:
                if directory.is_dir():
                    target_dir = directory
                    break

            if target_dir is None:
                fallback_dir = Path(modlist_dir) if modlist_dir and Path(modlist_dir).is_dir() else None
                if fallback_dir:
                    logger.warning(f"No stock/vanilla directories found; falling back to modlist directory: {fallback_dir}")
                    target_dir = fallback_dir
                else:
                    logger.error("All candidate directories for dxvk.conf are missing.")
                    return False

            dxvk_conf_path = target_dir / "dxvk.conf"

            # The required line that Jackify needs
            required_line = "dxvk.enableGraphicsPipelineLibrary = False"

            # Check if dxvk.conf already exists
            if dxvk_conf_path.exists():
                logger.info(f"Found existing dxvk.conf at {dxvk_conf_path}")
                # Read existing content
                try:
                    with open(dxvk_conf_path, 'r', encoding='utf-8') as f:
                        existing_content = f.read().strip()

                    # Check if our required line is already present
@@ -339,7 +337,7 @@ class PathHandler:
                    updated_content = required_line + '\n'
                    logger.info("Adding required DXVK setting to empty file")

                    with open(dxvk_conf_path, 'w', encoding='utf-8') as f:
                        f.write(updated_content)
                    logger.info(f"dxvk.conf updated successfully at {dxvk_conf_path}")
@@ -353,7 +351,8 @@ class PathHandler:
                # Create new dxvk.conf file (original behavior)
                dxvk_conf_content = required_line + '\n'
                dxvk_conf_path.parent.mkdir(parents=True, exist_ok=True)
                with open(dxvk_conf_path, 'w', encoding='utf-8') as f:
                    f.write(dxvk_conf_content)
                logger.info(f"dxvk.conf created successfully at {dxvk_conf_path}")
@@ -363,6 +362,99 @@ class PathHandler:
logger.error(f"Error creating dxvk.conf: {e}") logger.error(f"Error creating dxvk.conf: {e}")
return False return False
@staticmethod
def verify_dxvk_conf_exists(modlist_dir, steam_library, game_var_full, vanilla_game_dir=None, stock_game_path=None) -> bool:
"""
Verify that dxvk.conf exists in at least one of the candidate directories and contains the required setting.
"""
required_line = "dxvk.enableGraphicsPipelineLibrary = False"
candidate_dirs = PathHandler._build_dxvk_candidate_dirs(
modlist_dir=modlist_dir,
stock_game_path=stock_game_path,
steam_library=steam_library,
game_var_full=game_var_full,
vanilla_game_dir=vanilla_game_dir
)
for directory in candidate_dirs:
conf_path = directory / "dxvk.conf"
if conf_path.is_file():
try:
with open(conf_path, 'r', encoding='utf-8') as f:
content = f.read()
if required_line not in content:
logger.warning(f"dxvk.conf found at {conf_path} but missing required setting. Appending now.")
with open(conf_path, 'a', encoding='utf-8') as f:
if not content.endswith('\n'):
f.write('\n')
f.write(required_line + '\n')
logger.info(f"Verified dxvk.conf at {conf_path}")
return True
except Exception as e:
logger.warning(f"Failed to verify dxvk.conf at {conf_path}: {e}")
logger.warning("dxvk.conf verification failed - file not found in any candidate directory.")
return False
@staticmethod
def _normalize_common_library_path(steam_library: Optional[str]) -> Optional[Path]:
if not steam_library:
return None
path = Path(steam_library)
parts_lower = [part.lower() for part in path.parts]
if len(parts_lower) >= 2 and parts_lower[-2:] == ['steamapps', 'common']:
return path
if parts_lower and parts_lower[-1] == 'common':
return path
if 'steamapps' in parts_lower:
idx = parts_lower.index('steamapps')
truncated = Path(*path.parts[:idx + 1])
return truncated / 'common'
return path / 'steamapps' / 'common'
@staticmethod
def _build_dxvk_candidate_dirs(modlist_dir, stock_game_path, steam_library, game_var_full, vanilla_game_dir) -> List[Path]:
candidates: List[Path] = []
seen = set()
def add_candidate(path_obj: Optional[Path]):
if not path_obj:
return
key = path_obj.resolve() if path_obj.exists() else path_obj
if key in seen:
return
seen.add(key)
candidates.append(path_obj)
if stock_game_path:
add_candidate(Path(stock_game_path))
if modlist_dir:
base_path = Path(modlist_dir)
common_names = [
"Stock Game",
"Game Root",
"STOCK GAME",
"Stock Game Folder",
"Stock Folder",
"Skyrim Stock",
os.path.join("root", "Skyrim Special Edition")
]
for name in common_names:
add_candidate(base_path / name)
steam_common = PathHandler._normalize_common_library_path(steam_library)
if steam_common and game_var_full:
add_candidate(steam_common / game_var_full)
if vanilla_game_dir:
add_candidate(Path(vanilla_game_dir))
if modlist_dir:
add_candidate(Path(modlist_dir))
return candidates
    @staticmethod
    def find_steam_config_vdf() -> Optional[Path]:
        """Finds the active Steam config.vdf file."""
@@ -491,12 +583,9 @@ class PathHandler:
logger.debug(f"Searching for compatdata directory for AppID: {appid}") logger.debug(f"Searching for compatdata directory for AppID: {appid}")
# Use libraryfolders.vdf to find all Steam library paths # Use libraryfolders.vdf to find all Steam library paths, when available
library_paths = PathHandler.get_all_steam_library_paths() library_paths = PathHandler.get_all_steam_library_paths()
if not library_paths: if library_paths:
logger.error("Could not find any Steam library paths from libraryfolders.vdf")
return None
logger.debug(f"Checking compatdata in {len(library_paths)} Steam libraries") logger.debug(f"Checking compatdata in {len(library_paths)} Steam libraries")
# Check each Steam library's compatdata directory # Check each Steam library's compatdata directory
@@ -513,18 +602,34 @@ class PathHandler:
                else:
                    logger.debug(f"Compatdata for AppID {appid} not found in {compatdata_base}")

        # Check fallback locations only if we didn't find valid libraries
        # If we have valid libraries from libraryfolders.vdf, we should NOT fall back to wrong locations
        is_flatpak_steam = any('.var/app/com.valvesoftware.Steam' in str(lib) for lib in library_paths) if library_paths else False

        if not library_paths or is_flatpak_steam:
# Only check Flatpak-specific fallbacks if we have Flatpak Steam
logger.debug("Checking fallback compatdata locations...")
if is_flatpak_steam:
# For Flatpak Steam, only check Flatpak-specific locations
fallback_locations = [
Path.home() / ".var/app/com.valvesoftware.Steam/.local/share/Steam/steamapps/compatdata",
Path.home() / ".var/app/com.valvesoftware.Steam/data/Steam/steamapps/compatdata",
]
else:
# For native Steam or unknown, check standard locations
fallback_locations = [
Path.home() / ".local/share/Steam/steamapps/compatdata",
Path.home() / ".steam/steam/steamapps/compatdata",
]
for compatdata_base in fallback_locations:
if compatdata_base.is_dir():
potential_path = compatdata_base / appid
if potential_path.is_dir():
logger.warning(f"Found compatdata directory in fallback location (may be from old incorrect creation): {potential_path}")
return potential_path
logger.warning(f"Compatdata directory for AppID {appid} not found in any Steam library or fallback location.")
        return None

    @staticmethod
@@ -617,14 +722,22 @@ class PathHandler:
            if vdf_path.is_file():
                logger.info(f"[DEBUG] Parsing libraryfolders.vdf: {vdf_path}")
                try:
                    with open(vdf_path, 'r', encoding='utf-8') as f:
                        data = vdf.load(f)
                    # libraryfolders.vdf structure: libraryfolders -> "0", "1", etc. -> "path"
                    libraryfolders = data.get('libraryfolders', {})
                    for key, lib_data in libraryfolders.items():
                        if isinstance(lib_data, dict) and 'path' in lib_data:
                            lib_path = Path(lib_data['path'])
                            # Resolve symlinks for consistency (mmcblk0p1 -> deck/UUID)
                            try:
                                resolved_path = lib_path.resolve()
                                library_paths.add(resolved_path)
                                logger.debug(f"[DEBUG] Found library path: {resolved_path}")
                            except (OSError, RuntimeError) as resolve_err:
                                # If resolve fails, use original path
                                logger.warning(f"[DEBUG] Could not resolve {lib_path}, using as-is: {resolve_err}")
                                library_paths.add(lib_path)
                except Exception as e:
                    logger.error(f"[DEBUG] Failed to parse {vdf_path}: {e}")
        logger.info(f"[DEBUG] All detected Steam libraries: {library_paths}")

File diff suppressed because it is too large


@@ -0,0 +1,51 @@
"""
Example usage of ProgressParser
This file demonstrates how to use the progress parser to extract
structured information from jackify-engine output.
R&D NOTE: This is experimental code for investigation purposes.
"""
from jackify.backend.handlers.progress_parser import ProgressStateManager
def example_usage():
"""Example of how to use the progress parser."""
# Create state manager
state_manager = ProgressStateManager()
# Simulate processing lines from jackify-engine output
sample_lines = [
"[00:00:00] === Installing files ===",
"[00:00:05] [12/14] Installing files (1.1GB/56.3GB)",
"[00:00:10] Installing: Enderal Remastered Armory.7z (42%)",
"[00:00:15] Extracting: Mandragora Sprouts.7z (96%)",
"[00:00:20] Downloading at 45.2MB/s",
"[00:00:25] Extracting at 267.3MB/s",
"[00:00:30] Progress: 85%",
]
print("Processing sample output lines...\n")
for line in sample_lines:
updated = state_manager.process_line(line)
if updated:
state = state_manager.get_state()
print(f"Line: {line}")
print(f" Phase: {state.phase.value} - {state.phase_name}")
print(f" Progress: {state.overall_percent:.1f}%")
print(f" Step: {state.phase_progress_text}")
print(f" Data: {state.data_progress_text}")
print(f" Active Files: {len(state.active_files)}")
for file_prog in state.active_files:
print(f" - {file_prog.filename}: {file_prog.percent:.1f}%")
print(f" Speeds: {state.speeds}")
print(f" Display: {state.display_text}")
print()
if __name__ == "__main__":
example_usage()


@@ -35,35 +35,133 @@ class ProtontricksHandler:
        self._native_steam_service = None
        self.use_native_operations = True  # Enable native Steam operations by default
def _get_steam_dir_from_libraryfolders(self) -> Optional[Path]:
"""
Determine the Steam installation directory from libraryfolders.vdf location.
This is the source of truth - we read libraryfolders.vdf to find where Steam is actually installed.
Returns:
Path to Steam installation directory (the one with config/, steamapps/, etc.) or None
"""
from ..handlers.path_handler import PathHandler
# Check all possible libraryfolders.vdf locations
vdf_paths = [
Path.home() / ".steam/steam/config/libraryfolders.vdf",
Path.home() / ".local/share/Steam/config/libraryfolders.vdf",
Path.home() / ".steam/root/config/libraryfolders.vdf",
Path.home() / ".var/app/com.valvesoftware.Steam/.local/share/Steam/config/libraryfolders.vdf", # Flatpak
Path.home() / ".var/app/com.valvesoftware.Steam/data/Steam/config/libraryfolders.vdf", # Flatpak alternative
]
for vdf_path in vdf_paths:
if vdf_path.is_file():
# The Steam installation directory is the parent of the config directory
steam_dir = vdf_path.parent.parent
# Verify it has steamapps directory (required by protontricks)
if (steam_dir / "steamapps").exists():
logger.debug(f"Determined STEAM_DIR from libraryfolders.vdf: {steam_dir}")
return steam_dir
# Fallback: try to get from library paths
library_paths = PathHandler.get_all_steam_library_paths()
if library_paths:
# For Flatpak Steam, library path is .local/share/Steam, but Steam installation might be data/Steam
first_lib = library_paths[0]
if '.var/app/com.valvesoftware.Steam' in str(first_lib):
# Check if data/Steam exists (main Flatpak Steam installation)
data_steam = Path.home() / ".var/app/com.valvesoftware.Steam/data/Steam"
if (data_steam / "steamapps").exists():
logger.debug(f"Determined STEAM_DIR from Flatpak data path: {data_steam}")
return data_steam
# Otherwise use the library path itself
if (first_lib / "steamapps").exists():
logger.debug(f"Determined STEAM_DIR from Flatpak library path: {first_lib}")
return first_lib
else:
# Native Steam - library path should be the Steam installation
if (first_lib / "steamapps").exists():
logger.debug(f"Determined STEAM_DIR from native library path: {first_lib}")
return first_lib
logger.warning("Could not determine STEAM_DIR from libraryfolders.vdf")
return None
def _get_bundled_winetricks_path(self) -> Optional[Path]:
"""
Get the path to the bundled winetricks script following AppImage best practices.
Same logic as WinetricksHandler._get_bundled_winetricks_path()
"""
possible_paths = []
# AppImage environment - use APPDIR (standard AppImage best practice)
if os.environ.get('APPDIR'):
appdir_path = Path(os.environ['APPDIR']) / 'opt' / 'jackify' / 'tools' / 'winetricks'
possible_paths.append(appdir_path)
# Development environment - relative to module location
module_dir = Path(__file__).parent.parent.parent # Go from handlers/ up to jackify/
dev_path = module_dir / 'tools' / 'winetricks'
possible_paths.append(dev_path)
# Try each path until we find one that works
for path in possible_paths:
if path.exists() and os.access(path, os.X_OK):
logger.debug(f"Found bundled winetricks at: {path}")
return path
logger.warning(f"Bundled winetricks not found. Tried paths: {possible_paths}")
return None
def _get_bundled_cabextract_path(self) -> Optional[Path]:
"""
Get the path to the bundled cabextract binary following AppImage best practices.
Same logic as WinetricksHandler._get_bundled_cabextract()
"""
possible_paths = []
# AppImage environment - use APPDIR (standard AppImage best practice)
if os.environ.get('APPDIR'):
appdir_path = Path(os.environ['APPDIR']) / 'opt' / 'jackify' / 'tools' / 'cabextract'
possible_paths.append(appdir_path)
# Development environment - relative to module location
module_dir = Path(__file__).parent.parent.parent # Go from handlers/ up to jackify/
dev_path = module_dir / 'tools' / 'cabextract'
possible_paths.append(dev_path)
# Try each path until we find one that works
for path in possible_paths:
if path.exists() and os.access(path, os.X_OK):
logger.debug(f"Found bundled cabextract at: {path}")
return path
logger.warning(f"Bundled cabextract not found. Tried paths: {possible_paths}")
return None
    def _get_clean_subprocess_env(self):
        """
        Create a clean environment for subprocess calls by removing bundle-specific
        environment variables that can interfere with external program execution.

        Uses the centralized get_clean_subprocess_env() to ensure AppImage variables
        are removed to prevent subprocess spawning issues.

        Returns:
            dict: Cleaned environment dictionary
        """
        # Use centralized function that removes AppImage variables
        from .subprocess_utils import get_clean_subprocess_env
        env = get_clean_subprocess_env()

        # Clean library path variables that frozen bundles modify (Linux/Unix)
        if 'LD_LIBRARY_PATH_ORIG' in env:
            # Restore original LD_LIBRARY_PATH if it was backed up by the bundler
            env['LD_LIBRARY_PATH'] = env['LD_LIBRARY_PATH_ORIG']
        else:
            # Remove bundle-modified LD_LIBRARY_PATH
            env.pop('LD_LIBRARY_PATH', None)

        # Clean macOS library path (if present)
        if 'DYLD_LIBRARY_PATH' in env and hasattr(sys, '_MEIPASS'):
            dyld_entries = env['DYLD_LIBRARY_PATH'].split(os.pathsep)
@@ -84,16 +182,16 @@ class ProtontricksHandler:
    def detect_protontricks(self):
        """
        Detect if protontricks is installed (silent detection for GUI/automated use).

        Returns True if protontricks is found, False otherwise.
        Does NOT prompt user or attempt installation - that's handled by the GUI.
        """
        logger.debug("Detecting if protontricks is installed...")

        # Check if protontricks exists as a command
        protontricks_path_which = shutil.which("protontricks")
        self.flatpak_path = shutil.which("flatpak")

        if protontricks_path_which:
            # Check if it's a flatpak wrapper
@@ -103,7 +201,6 @@ class ProtontricksHandler:
                if "flatpak run" in content:
                    logger.debug(f"Detected Protontricks is a Flatpak wrapper at {protontricks_path_which}")
                    self.which_protontricks = 'flatpak'
                else:
                    logger.info(f"Native Protontricks found at {protontricks_path_which}")
                    self.which_protontricks = 'native'
@@ -112,102 +209,26 @@ class ProtontricksHandler:
        except Exception as e:
            logger.error(f"Error reading protontricks executable: {e}")

        # Check if flatpak protontricks is installed
        try:
            env = self._get_clean_subprocess_env()
            result = subprocess.run(
                ["flatpak", "list"],
                capture_output=True,
                text=True,
                env=env
            )
            if result.returncode == 0 and "com.github.Matoking.protontricks" in result.stdout:
                logger.info("Flatpak Protontricks is installed")
                self.which_protontricks = 'flatpak'
                return True
        except FileNotFoundError:
            logger.warning("'flatpak' command not found. Cannot check for Flatpak Protontricks.")
        except Exception as e:
            logger.error(f"Unexpected error checking flatpak: {e}")

        # Not found
        logger.warning("Protontricks not found (native or flatpak).")
should_install = False
if self.steamdeck:
logger.info("Running on Steam Deck, attempting automatic Flatpak installation.")
# Maybe add a brief pause or message?
print("Protontricks not found. Attempting automatic installation via Flatpak...")
should_install = True
else:
try:
print("\nProtontricks not found. Choose installation method:")
print("1. Install via Flatpak (automatic)")
print("2. Install via native package manager (manual)")
print("3. Skip (Use bundled winetricks instead)")
choice = input("Enter choice (1/2/3): ").strip()
if choice == '1' or choice == '':
should_install = True
elif choice == '2':
print("\nTo install protontricks via your system package manager:")
print("• Ubuntu/Debian: sudo apt install protontricks")
print("• Fedora: sudo dnf install protontricks")
print("• Arch Linux: sudo pacman -S protontricks")
print("• openSUSE: sudo zypper install protontricks")
print("\nAfter installation, please rerun Jackify.")
return False
elif choice == '3':
print("Skipping protontricks installation. Will use bundled winetricks for component installation.")
logger.info("User chose to skip protontricks and use winetricks fallback")
return False
else:
print("Invalid choice. Installation cancelled.")
return False
except KeyboardInterrupt:
print("\nInstallation cancelled.")
return False
if should_install:
try:
logger.info("Attempting to install Flatpak Protontricks...")
# Use --noninteractive for automatic install where applicable
install_cmd = ["flatpak", "install", "-u", "-y", "--noninteractive", "flathub", "com.github.Matoking.protontricks"]
# PyInstaller fix: Comprehensive environment cleaning for subprocess calls
env = self._get_clean_subprocess_env()
# Run with output visible to user
process = subprocess.run(install_cmd, check=True, text=True, env=env)
logger.info("Flatpak Protontricks installation successful.")
print("Flatpak Protontricks installed successfully.")
self.which_protontricks = 'flatpak'
return True
except FileNotFoundError:
logger.error("'flatpak' command not found. Cannot install.")
print("Error: 'flatpak' command not found. Please install Flatpak first.")
return False
except subprocess.CalledProcessError as e:
logger.error(f"Flatpak installation failed: {e}")
print(f"Error: Flatpak installation failed (Command: {' '.join(e.cmd)}). Please try installing manually.")
return False
except Exception as e:
logger.error(f"Unexpected error during Flatpak installation: {e}")
print("An unexpected error occurred during installation.")
return False
else:
logger.error("User chose not to install Protontricks or installation skipped.")
print("Protontricks installation skipped. Cannot continue without Protontricks.")
return False
# Should not reach here if logic is correct, but acts as a fallback
logger.error("Protontricks detection failed unexpectedly.")
        return False

    def check_protontricks_version(self):
@@ -257,11 +278,28 @@ class ProtontricksHandler:
logger.error("Could not detect protontricks installation") logger.error("Could not detect protontricks installation")
return None return None
if self.which_protontricks == 'flatpak': # Build command based on detected protontricks type
cmd = ["flatpak", "run", "com.github.Matoking.protontricks"] if self.which_protontricks == 'bundled':
else: # CRITICAL: Use safe Python executable to prevent AppImage recursive spawning
cmd = ["protontricks"] from .subprocess_utils import get_safe_python_executable
python_exe = get_safe_python_executable()
# Use bundled wrapper script for reliable invocation
# The wrapper script imports cli and calls it with sys.argv
wrapper_script = self._get_bundled_protontricks_wrapper_path()
if wrapper_script and Path(wrapper_script).exists():
cmd = [python_exe, str(wrapper_script)]
cmd.extend([str(a) for a in args])
else:
# Fallback: use python -m to run protontricks CLI directly
# This avoids importing protontricks.__init__ which imports gui.py which needs Pillow
cmd = [python_exe, "-m", "protontricks.cli.main"]
cmd.extend([str(a) for a in args])
elif self.which_protontricks == 'flatpak':
cmd = ["flatpak", "run", "com.github.Matoking.protontricks"]
cmd.extend(args)
else: # native
cmd = ["protontricks"]
cmd.extend(args) cmd.extend(args)
# Default to capturing stdout/stderr unless specified otherwise in kwargs # Default to capturing stdout/stderr unless specified otherwise in kwargs
@@ -271,16 +309,71 @@ class ProtontricksHandler:
            'text': True,
            **kwargs  # Allow overriding defaults (like stderr=DEVNULL)
        }

        # Handle environment: if env was passed in kwargs, merge it with our clean env
        # Otherwise create a clean env from scratch
        if 'env' in kwargs and kwargs['env']:
            # Merge passed env with our clean env (our values take precedence)
            env = self._get_clean_subprocess_env()
            env.update(kwargs['env'])  # Merge passed env, but our clean env is base
            # Re-apply our critical settings after merge to ensure they're set
        else:
            # Bundled-runtime fix: Use cleaned environment for all protontricks calls
            env = self._get_clean_subprocess_env()

        # Suppress Wine debug output
        env['WINEDEBUG'] = '-all'
# CRITICAL: Set STEAM_DIR based on libraryfolders.vdf to prevent user prompts
steam_dir = self._get_steam_dir_from_libraryfolders()
if steam_dir:
env['STEAM_DIR'] = str(steam_dir)
logger.debug(f"Set STEAM_DIR for protontricks: {steam_dir}")
else:
logger.warning("Could not determine STEAM_DIR from libraryfolders.vdf - protontricks may prompt user")
# CRITICAL: Only set bundled winetricks for NATIVE protontricks
# Flatpak protontricks runs in a sandbox and CANNOT access AppImage FUSE mounts (/tmp/.mount_*)
# Flatpak protontricks has its own winetricks bundled inside the flatpak
if self.which_protontricks == 'native':
winetricks_path = self._get_bundled_winetricks_path()
if winetricks_path:
env['WINETRICKS'] = str(winetricks_path)
logger.debug(f"Set WINETRICKS for native protontricks: {winetricks_path}")
else:
logger.warning("Bundled winetricks not found - native protontricks will use system winetricks")
cabextract_path = self._get_bundled_cabextract_path()
if cabextract_path:
cabextract_dir = str(cabextract_path.parent)
current_path = env.get('PATH', '')
env['PATH'] = f"{cabextract_dir}{os.pathsep}{current_path}" if current_path else cabextract_dir
logger.debug(f"Added bundled cabextract to PATH for native protontricks: {cabextract_dir}")
else:
logger.warning("Bundled cabextract not found - native protontricks will use system cabextract")
else:
# Flatpak protontricks - DO NOT set bundled paths
logger.debug(f"Using {self.which_protontricks} protontricks - it has its own winetricks (cannot access AppImage mounts)")
# CRITICAL: Suppress winetricks verbose output when not in debug mode
# WINETRICKS_SUPER_QUIET suppresses "Executing..." messages from winetricks
from ..handlers.config_handler import ConfigHandler
config_handler = ConfigHandler()
debug_mode = config_handler.get('debug_mode', False)
if not debug_mode:
env['WINETRICKS_SUPER_QUIET'] = '1'
logger.debug("Set WINETRICKS_SUPER_QUIET=1 to suppress winetricks verbose output")
else:
logger.debug("Debug mode enabled - winetricks verbose output will be shown")
# Note: No need to modify LD_LIBRARY_PATH for Wine/Proton as it's a system dependency
# Wine/Proton finds its own libraries through the system's library search paths
        run_kwargs['env'] = env

        try:
            return subprocess.run(cmd, **run_kwargs)
        except Exception as e:
            logger.error(f"Error running protontricks: {e}")
            return None
    def set_protontricks_permissions(self, modlist_dir, steamdeck=False):
@@ -304,7 +397,7 @@ class ProtontricksHandler:
logger.info("Setting Protontricks permissions...") logger.info("Setting Protontricks permissions...")
try: try:
# PyInstaller fix: Use cleaned environment # Bundled-runtime fix: Use cleaned environment
env = self._get_clean_subprocess_env() env = self._get_clean_subprocess_env()
subprocess.run(["flatpak", "override", "--user", "com.github.Matoking.protontricks", subprocess.run(["flatpak", "override", "--user", "com.github.Matoking.protontricks",
@@ -414,7 +507,7 @@ class ProtontricksHandler:
logger.error("Protontricks path not determined, cannot list shortcuts.") logger.error("Protontricks path not determined, cannot list shortcuts.")
return {} return {}
self.logger.debug(f"Running command: {' '.join(cmd)}") self.logger.debug(f"Running command: {' '.join(cmd)}")
# PyInstaller fix: Use cleaned environment # Bundled-runtime fix: Use cleaned environment
env = self._get_clean_subprocess_env() env = self._get_clean_subprocess_env()
result = subprocess.run(cmd, capture_output=True, text=True, check=True, encoding='utf-8', errors='ignore', env=env) result = subprocess.run(cmd, capture_output=True, text=True, check=True, encoding='utf-8', errors='ignore', env=env)
# Regex to capture name and AppID # Regex to capture name and AppID
@@ -566,7 +659,7 @@ class ProtontricksHandler:
        Returns True on success, False on failure
        """
        try:
            # Bundled-runtime fix: Use cleaned environment
            env = self._get_clean_subprocess_env()
            env["WINEDEBUG"] = "-all"
@@ -652,16 +745,22 @@ class ProtontricksHandler:
    def run_protontricks_launch(self, appid, installer_path, *extra_args):
        """
        Run protontricks-launch (for WebView or similar installers) using the correct method for bundled, flatpak, or native.
        Returns subprocess.CompletedProcess object.
        """
        if self.which_protontricks is None:
            if not self.detect_protontricks():
                self.logger.error("Could not detect protontricks installation")
                return None

        if self.which_protontricks == 'bundled':
            # CRITICAL: Use safe Python executable to prevent AppImage recursive spawning
            from .subprocess_utils import get_safe_python_executable
            python_exe = get_safe_python_executable()
            # Use bundled Python module
            cmd = [python_exe, "-m", "protontricks.cli.launch", "--appid", appid, str(installer_path)]
        elif self.which_protontricks == 'flatpak':
            cmd = ["flatpak", "run", "--command=protontricks-launch", "com.github.Matoking.protontricks", "--appid", appid, str(installer_path)]
        else:  # native
            launch_path = shutil.which("protontricks-launch")
            if not launch_path:
                self.logger.error("protontricks-launch command not found in PATH.")
@@ -671,7 +770,7 @@ class ProtontricksHandler:
        cmd.extend(extra_args)
        self.logger.debug(f"Running protontricks-launch: {' '.join(map(str, cmd))}")
        try:
            # Bundled-runtime fix: Use cleaned environment
            env = self._get_clean_subprocess_env()
            return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True, env=env)
        except Exception as e:
@@ -685,6 +784,44 @@ class ProtontricksHandler:
""" """
env = self._get_clean_subprocess_env() env = self._get_clean_subprocess_env()
env["WINEDEBUG"] = "-all" env["WINEDEBUG"] = "-all"
# CRITICAL: Only set bundled winetricks for NATIVE protontricks
# Flatpak protontricks runs in a sandbox and CANNOT access AppImage FUSE mounts (/tmp/.mount_*)
# Flatpak protontricks has its own winetricks bundled inside the flatpak
if self.which_protontricks == 'native':
winetricks_path = self._get_bundled_winetricks_path()
if winetricks_path:
env['WINETRICKS'] = str(winetricks_path)
self.logger.debug(f"Set WINETRICKS for native protontricks: {winetricks_path}")
else:
self.logger.warning("Bundled winetricks not found - native protontricks will use system winetricks")
cabextract_path = self._get_bundled_cabextract_path()
if cabextract_path:
cabextract_dir = str(cabextract_path.parent)
current_path = env.get('PATH', '')
env['PATH'] = f"{cabextract_dir}{os.pathsep}{current_path}" if current_path else cabextract_dir
self.logger.debug(f"Added bundled cabextract to PATH for native protontricks: {cabextract_dir}")
else:
self.logger.warning("Bundled cabextract not found - native protontricks will use system cabextract")
else:
# Flatpak protontricks - DO NOT set bundled paths
self.logger.info(f"Using {self.which_protontricks} protontricks - it has its own winetricks (cannot access AppImage mounts)")
# CRITICAL: Suppress winetricks verbose output when not in debug mode
from ..handlers.config_handler import ConfigHandler
config_handler = ConfigHandler()
debug_mode = config_handler.get('debug_mode', False)
if not debug_mode:
env['WINETRICKS_SUPER_QUIET'] = '1'
self.logger.debug("Set WINETRICKS_SUPER_QUIET=1 in install_wine_components to suppress winetricks verbose output")
# Set up winetricks cache (shared with winetricks_handler for efficiency)
from jackify.shared.paths import get_jackify_data_dir
jackify_cache_dir = get_jackify_data_dir() / 'winetricks_cache'
jackify_cache_dir.mkdir(parents=True, exist_ok=True)
env['WINETRICKS_CACHE'] = str(jackify_cache_dir)
self.logger.info(f"Using winetricks cache: {jackify_cache_dir}")
        if specific_components is not None:
            components_to_install = specific_components
            self.logger.info(f"Installing specific components: {components_to_install}")
@@ -716,8 +853,25 @@ class ProtontricksHandler:
                    # Continue to retry
                else:
                    self.logger.error(f"Protontricks command failed (Attempt {attempt}/{max_attempts}). Return Code: {result.returncode if result else 'N/A'}")
# Only show stdout/stderr in debug mode to avoid verbose output
from ..handlers.config_handler import ConfigHandler
config_handler = ConfigHandler()
debug_mode = config_handler.get('debug_mode', False)
if debug_mode:
self.logger.error(f"Stdout: {result.stdout.strip() if result else ''}") self.logger.error(f"Stdout: {result.stdout.strip() if result else ''}")
self.logger.error(f"Stderr: {result.stderr.strip() if result else ''}") self.logger.error(f"Stderr: {result.stderr.strip() if result else ''}")
else:
# In non-debug mode, only show stderr if it contains actual errors (not verbose winetricks output)
if result and result.stderr:
stderr_lower = result.stderr.lower()
# Filter out verbose winetricks messages
if any(keyword in stderr_lower for keyword in ['error', 'failed', 'cannot', 'warning: cannot find']):
# Only show actual errors, not "Executing..." messages
error_lines = [line for line in result.stderr.strip().split('\n')
if any(keyword in line.lower() for keyword in ['error', 'failed', 'cannot', 'warning: cannot find'])
and 'executing' not in line.lower()]
if error_lines:
self.logger.error(f"Stderr (errors only): {' '.join(error_lines)}")
            except Exception as e:
                self.logger.error(f"Error during protontricks run (Attempt {attempt}/{max_attempts}): {e}", exc_info=True)

        self.logger.error(f"Failed to install Wine components after {max_attempts} attempts.")


@@ -198,7 +198,10 @@ class ShortcutHandler:
        if steam_vdf_spec is None:
            # Try to install steam-vdf using pip
            print("Installing required dependency (steam-vdf)...")
            # CRITICAL: Use safe Python executable to prevent AppImage recursive spawning
            from jackify.backend.handlers.subprocess_utils import get_safe_python_executable
            python_exe = get_safe_python_executable()
            subprocess.check_call([python_exe, "-m", "pip", "install", "steam-vdf", "--user"])
            time.sleep(1)  # Give some time for the install to complete

            # Now import it


@@ -3,17 +3,119 @@ import signal
import subprocess
import time
import resource
import sys
import shutil
def get_safe_python_executable():
"""
Get a safe Python executable for subprocess calls.
When running as AppImage, returns system Python instead of AppImage path
to prevent recursive AppImage spawning.
Returns:
str: Path to Python executable safe for subprocess calls
"""
# Check if we're running as AppImage
is_appimage = (
'APPIMAGE' in os.environ or
'APPDIR' in os.environ or
(hasattr(sys, 'frozen') and sys.frozen) or
(sys.argv[0] and sys.argv[0].endswith('.AppImage'))
)
if is_appimage:
# Running as AppImage - use system Python to avoid recursive spawning
# Try to find system Python (same logic as AppRun)
for cmd in ['python3', 'python3.13', 'python3.12', 'python3.11', 'python3.10', 'python3.9', 'python3.8']:
python_path = shutil.which(cmd)
if python_path:
return python_path
# Fallback: if we can't find system Python, this is a problem
# But we'll still return sys.executable as last resort
return sys.executable
else:
# Not AppImage - sys.executable is safe
return sys.executable
def get_clean_subprocess_env(extra_env=None): def get_clean_subprocess_env(extra_env=None):
""" """
Returns a copy of os.environ with PyInstaller and other problematic variables removed. Returns a copy of os.environ with bundled-runtime variables and other problematic entries removed.
Optionally merges in extra_env dict. Optionally merges in extra_env dict.
Also ensures bundled tools (lz4, unzip, etc.) are in PATH when running as AppImage.
CRITICAL: Preserves system PATH to ensure system tools (like lz4) are available.
""" """
from pathlib import Path
env = os.environ.copy()
# Remove PyInstaller-specific variables
# Remove AppImage-specific variables that can confuse subprocess calls
# These variables cause subprocesses to be interpreted as new AppImage launches
for key in ['APPIMAGE', 'APPDIR', 'ARGV0', 'OWD']:
env.pop(key, None)
# Remove bundle-specific variables
for k in list(env):
if k.startswith('_MEIPASS'):
del env[k]
# Get current PATH - ensure we preserve system paths
current_path = env.get('PATH', '')
# Ensure common system directories are in PATH if not already present
# This is critical for tools like lz4 that might be in /usr/bin, /usr/local/bin, etc.
system_paths = ['/usr/bin', '/usr/local/bin', '/bin', '/sbin', '/usr/sbin']
path_parts = current_path.split(':') if current_path else []
for sys_path in system_paths:
if sys_path not in path_parts and os.path.isdir(sys_path):
path_parts.append(sys_path)
# Add bundled tools directory to PATH if running as AppImage
# This ensures lz4, unzip, xz, etc. are available to subprocesses
appdir = env.get('APPDIR')
tools_dir = None
if appdir:
# Running as AppImage - use APPDIR
tools_dir = os.path.join(appdir, 'opt', 'jackify', 'tools')
# Verify the tools directory exists and contains lz4
if not os.path.isdir(tools_dir):
tools_dir = None
elif not os.path.exists(os.path.join(tools_dir, 'lz4')):
# Tools dir exists but lz4 not there - might be a different layout
tools_dir = None
elif getattr(sys, 'frozen', False):
# PyInstaller frozen - try to find tools relative to executable
exe_path = Path(sys.executable)
# In PyInstaller, sys.executable is the bundled executable
# Tools should be in the same directory or a tools subdirectory
possible_tools_dirs = [
exe_path.parent / 'tools',
exe_path.parent / 'opt' / 'jackify' / 'tools',
]
for possible_dir in possible_tools_dirs:
if possible_dir.is_dir() and (possible_dir / 'lz4').exists():
tools_dir = str(possible_dir)
break
# Build final PATH: bundled tools first (if any), then original PATH with system paths
final_path_parts = []
if tools_dir and os.path.isdir(tools_dir):
# Prepend tools directory so bundled tools take precedence
# This is critical - bundled lz4 must come before system lz4
final_path_parts.append(tools_dir)
# Add all other paths (preserving order, removing duplicates)
# Note: AppRun already sets PATH with tools directory, but we ensure it's first
seen = set()
if tools_dir:
seen.add(tools_dir) # Already added, don't add again
for path_part in path_parts:
if path_part and path_part not in seen:
final_path_parts.append(path_part)
seen.add(path_part)
env['PATH'] = ':'.join(final_path_parts)
# Optionally restore LD_LIBRARY_PATH to system default if needed
# (You can add more logic here if you know your system's default)
if extra_env:
@@ -59,6 +161,10 @@ class ProcessManager:
""" """
def __init__(self, cmd, env=None, cwd=None, text=False, bufsize=0): def __init__(self, cmd, env=None, cwd=None, text=False, bufsize=0):
self.cmd = cmd self.cmd = cmd
# Default to cleaned environment if None to prevent AppImage variable inheritance
if env is None:
self.env = get_clean_subprocess_env()
else:
self.env = env
self.cwd = cwd
self.text = text
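For orientation, here is a minimal sketch of how a caller might use the environment-cleaning helper above when launching an external tool. It assumes the `jackify.backend.handlers.subprocess_utils` import path shown earlier in this commit; the tool and extra variable are only illustrative.

```python
# Illustrative sketch only: launch an external tool with the cleaned environment so an
# AppImage-launched Jackify does not leak APPIMAGE/APPDIR/_MEIPASS* into the child process.
import subprocess
from jackify.backend.handlers.subprocess_utils import get_clean_subprocess_env

env = get_clean_subprocess_env(extra_env={"WINEDEBUG": "-all"})  # extra variable is hypothetical
result = subprocess.run(["lz4", "--version"], env=env, capture_output=True, text=True)
print(result.stdout.strip())
```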

View File

@@ -0,0 +1,742 @@
"""
TTW_Linux_Installer Handler
Handles downloading, installation, and execution of TTW_Linux_Installer for TTW installations.
Replaces hoolamike for TTW-specific functionality.
"""
import logging
import os
import subprocess
import tarfile
import zipfile
from pathlib import Path
from typing import Optional, Tuple
import requests
from .path_handler import PathHandler
from .filesystem_handler import FileSystemHandler
from .config_handler import ConfigHandler
from .logging_handler import LoggingHandler
from .subprocess_utils import get_clean_subprocess_env
logger = logging.getLogger(__name__)
# Define default TTW_Linux_Installer paths
JACKIFY_BASE_DIR = Path.home() / "Jackify"
DEFAULT_TTW_INSTALLER_DIR = JACKIFY_BASE_DIR / "TTW_Linux_Installer"
TTW_INSTALLER_EXECUTABLE_NAME = "ttw_linux_gui" # Same executable, runs in CLI mode with args
# GitHub release info
TTW_INSTALLER_REPO = "SulfurNitride/TTW_Linux_Installer"
TTW_INSTALLER_RELEASE_URL = f"https://api.github.com/repos/{TTW_INSTALLER_REPO}/releases/latest"
class TTWInstallerHandler:
"""Handles TTW installation using TTW_Linux_Installer (replaces hoolamike for TTW)."""
def __init__(self, steamdeck: bool, verbose: bool, filesystem_handler: FileSystemHandler,
config_handler: ConfigHandler, menu_handler=None):
"""Initialize the handler."""
self.steamdeck = steamdeck
self.verbose = verbose
self.path_handler = PathHandler()
self.filesystem_handler = filesystem_handler
self.config_handler = config_handler
self.menu_handler = menu_handler
# Set up logging
logging_handler = LoggingHandler()
logging_handler.rotate_log_for_logger('ttw-install', 'TTW_Install_workflow.log')
self.logger = logging_handler.setup_logger('ttw-install', 'TTW_Install_workflow.log')
# Installation paths
self.ttw_installer_dir: Path = DEFAULT_TTW_INSTALLER_DIR
self.ttw_installer_executable_path: Optional[Path] = None
self.ttw_installer_installed: bool = False
# Load saved install path from config
saved_path_str = self.config_handler.get('ttw_installer_install_path')
if saved_path_str and Path(saved_path_str).is_dir():
self.ttw_installer_dir = Path(saved_path_str)
self.logger.info(f"Loaded TTW_Linux_Installer path from config: {self.ttw_installer_dir}")
# Check if already installed
self._check_installation()
def _ensure_dirs_exist(self):
"""Ensure base directories exist."""
self.ttw_installer_dir.mkdir(parents=True, exist_ok=True)
def _check_installation(self):
"""Check if TTW_Linux_Installer is installed at expected location."""
self._ensure_dirs_exist()
potential_exe_path = self.ttw_installer_dir / TTW_INSTALLER_EXECUTABLE_NAME
if potential_exe_path.is_file() and os.access(potential_exe_path, os.X_OK):
self.ttw_installer_executable_path = potential_exe_path
self.ttw_installer_installed = True
self.logger.info(f"Found TTW_Linux_Installer at: {self.ttw_installer_executable_path}")
else:
self.ttw_installer_installed = False
self.ttw_installer_executable_path = None
self.logger.info(f"TTW_Linux_Installer not found at {potential_exe_path}")
def install_ttw_installer(self, install_dir: Optional[Path] = None) -> Tuple[bool, str]:
"""Download and install TTW_Linux_Installer from GitHub releases.
Args:
install_dir: Optional directory to install to (defaults to ~/Jackify/TTW_Linux_Installer)
Returns:
(success: bool, message: str)
"""
try:
self._ensure_dirs_exist()
target_dir = Path(install_dir) if install_dir else self.ttw_installer_dir
target_dir.mkdir(parents=True, exist_ok=True)
# Fetch latest release info
self.logger.info(f"Fetching latest TTW_Linux_Installer release from {TTW_INSTALLER_RELEASE_URL}")
resp = requests.get(TTW_INSTALLER_RELEASE_URL, timeout=15, verify=True)
resp.raise_for_status()
data = resp.json()
release_tag = data.get("tag_name") or data.get("name")
# Find Linux asset - universal-mpi-installer pattern (can be .zip or .tar.gz)
linux_asset = None
asset_names = [asset.get("name", "") for asset in data.get("assets", [])]
self.logger.info(f"Available release assets: {asset_names}")
for asset in data.get("assets", []):
name = asset.get("name", "").lower()
# Look for universal-mpi-installer pattern
if "universal-mpi-installer" in name and name.endswith((".zip", ".tar.gz")):
linux_asset = asset
self.logger.info(f"Found Linux asset: {asset.get('name')}")
break
if not linux_asset:
# Log all available assets for debugging
all_assets = [asset.get("name", "") for asset in data.get("assets", [])]
self.logger.error(f"No suitable Linux asset found. Available assets: {all_assets}")
return False, f"No suitable Linux TTW_Linux_Installer asset found in latest release. Available assets: {', '.join(all_assets)}"
download_url = linux_asset.get("browser_download_url")
asset_name = linux_asset.get("name")
if not download_url or not asset_name:
return False, "Latest release is missing required asset metadata"
# Download to target directory
temp_path = target_dir / asset_name
self.logger.info(f"Downloading {asset_name} from {download_url}")
if not self.filesystem_handler.download_file(download_url, temp_path, overwrite=True, quiet=True):
return False, "Failed to download TTW_Linux_Installer asset"
# Extract archive (zip or tar.gz)
try:
self.logger.info(f"Extracting {asset_name} to {target_dir}")
if asset_name.lower().endswith('.tar.gz'):
with tarfile.open(temp_path, "r:gz") as tf:
tf.extractall(path=target_dir)
elif asset_name.lower().endswith('.zip'):
with zipfile.ZipFile(temp_path, "r") as zf:
zf.extractall(path=target_dir)
else:
return False, f"Unsupported archive format: {asset_name}"
finally:
try:
temp_path.unlink(missing_ok=True) # cleanup
except Exception:
pass
# Find executable (may be in subdirectory or root)
exe_path = target_dir / TTW_INSTALLER_EXECUTABLE_NAME
if not exe_path.is_file():
# Search for it
for p in target_dir.rglob(TTW_INSTALLER_EXECUTABLE_NAME):
if p.is_file():
exe_path = p
break
if not exe_path.is_file():
return False, "TTW_Linux_Installer executable not found after extraction"
# Set executable permissions
try:
os.chmod(exe_path, 0o755)
except Exception as e:
self.logger.warning(f"Failed to chmod +x on {exe_path}: {e}")
# Update state
self.ttw_installer_dir = target_dir
self.ttw_installer_executable_path = exe_path
self.ttw_installer_installed = True
self.config_handler.set('ttw_installer_install_path', str(target_dir))
if release_tag:
self.config_handler.set('ttw_installer_version', release_tag)
self.logger.info(f"TTW_Linux_Installer installed successfully at {exe_path}")
return True, f"TTW_Linux_Installer installed at {target_dir}"
except Exception as e:
self.logger.error(f"Error installing TTW_Linux_Installer: {e}", exc_info=True)
return False, f"Error installing TTW_Linux_Installer: {e}"
def get_installed_ttw_installer_version(self) -> Optional[str]:
"""Return the installed TTW_Linux_Installer version stored in Jackify config, if any."""
try:
v = self.config_handler.get('ttw_installer_version')
return str(v) if v else None
except Exception:
return None
def is_ttw_installer_update_available(self) -> Tuple[bool, Optional[str], Optional[str]]:
"""
Check GitHub for the latest TTW_Linux_Installer release and compare with installed version.
Returns (update_available, installed_version, latest_version).
"""
installed = self.get_installed_ttw_installer_version()
# If executable exists but no version is recorded, don't show as "out of date"
# This can happen if the executable was installed before version tracking was added
if not installed and self.ttw_installer_installed:
self.logger.info("TTW_Linux_Installer executable found but no version recorded in config")
# Don't treat as update available - just show as "Ready" (unknown version)
return (False, None, None)
try:
resp = requests.get(TTW_INSTALLER_RELEASE_URL, timeout=10, verify=True)
resp.raise_for_status()
latest = resp.json().get('tag_name') or resp.json().get('name')
if not latest:
return (False, installed, None)
if not installed:
# No version recorded and executable doesn't exist; treat as not installed
return (False, None, str(latest))
return (installed != str(latest), installed, str(latest))
except Exception as e:
self.logger.warning(f"Error checking for TTW_Linux_Installer updates: {e}")
return (False, installed, None)
def install_ttw_backend(self, ttw_mpi_path: Path, ttw_output_path: Path) -> Tuple[bool, str]:
"""Install TTW using TTW_Linux_Installer.
Args:
ttw_mpi_path: Path to TTW .mpi file
ttw_output_path: Target installation directory
Returns:
(success: bool, message: str)
"""
self.logger.info("Starting Tale of Two Wastelands installation via TTW_Linux_Installer")
# Validate parameters
if not ttw_mpi_path or not ttw_output_path:
return False, "Missing required parameters: ttw_mpi_path and ttw_output_path are required"
ttw_mpi_path = Path(ttw_mpi_path)
ttw_output_path = Path(ttw_output_path)
# Validate paths
if not ttw_mpi_path.exists():
return False, f"TTW .mpi file not found: {ttw_mpi_path}"
if not ttw_mpi_path.is_file():
return False, f"TTW .mpi path is not a file: {ttw_mpi_path}"
if ttw_mpi_path.suffix.lower() != '.mpi':
return False, f"TTW path does not have .mpi extension: {ttw_mpi_path}"
if not ttw_output_path.exists():
try:
ttw_output_path.mkdir(parents=True, exist_ok=True)
except Exception as e:
return False, f"Failed to create output directory: {e}"
# Check installation
if not self.ttw_installer_installed:
# Try to install automatically
self.logger.info("TTW_Linux_Installer not found, attempting to install...")
success, message = self.install_ttw_installer()
if not success:
return False, f"TTW_Linux_Installer not installed and auto-install failed: {message}"
if not self.ttw_installer_executable_path or not self.ttw_installer_executable_path.is_file():
return False, "TTW_Linux_Installer executable not found"
# Detect game paths
required_games = ['Fallout 3', 'Fallout New Vegas']
detected_games = self.path_handler.find_vanilla_game_paths()
missing_games = [game for game in required_games if game not in detected_games]
if missing_games:
return False, f"Missing required games: {', '.join(missing_games)}. TTW requires both Fallout 3 and Fallout New Vegas."
fallout3_path = detected_games.get('Fallout 3')
falloutnv_path = detected_games.get('Fallout New Vegas')
if not fallout3_path or not falloutnv_path:
return False, "Could not detect Fallout 3 or Fallout New Vegas installation paths"
# Construct command - run in CLI mode with arguments
cmd = [
str(self.ttw_installer_executable_path),
"--fo3", str(fallout3_path),
"--fnv", str(falloutnv_path),
"--mpi", str(ttw_mpi_path),
"--output", str(ttw_output_path),
"--start"
]
self.logger.info(f"Executing TTW_Linux_Installer: {' '.join(cmd)}")
try:
env = get_clean_subprocess_env()
process = subprocess.Popen(
cmd,
cwd=str(self.ttw_installer_dir),
env=env,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
universal_newlines=True
)
# Stream output to logger
if process.stdout:
for line in process.stdout:
line = line.rstrip()
if line:
self.logger.info(f"TTW_Linux_Installer: {line}")
process.wait()
ret = process.returncode
if ret == 0:
self.logger.info("TTW installation completed successfully.")
return True, "TTW installation completed successfully!"
else:
self.logger.error(f"TTW installation process returned non-zero exit code: {ret}")
return False, f"TTW installation failed with exit code {ret}"
except Exception as e:
self.logger.error(f"Error executing TTW_Linux_Installer: {e}", exc_info=True)
return False, f"Error executing TTW_Linux_Installer: {e}"
def start_ttw_installation(self, ttw_mpi_path: Path, ttw_output_path: Path, output_file: Path):
"""Start TTW installation process (non-blocking).
Starts the TTW_Linux_Installer subprocess with output redirected to a file.
Returns immediately with process handle. Caller should poll process and read output file.
Args:
ttw_mpi_path: Path to TTW .mpi file
ttw_output_path: Target installation directory
output_file: Path to file where stdout/stderr will be written
Returns:
(process: subprocess.Popen, error_message: str) - process is None if failed
"""
self.logger.info("Starting TTW installation (non-blocking mode)")
# Validate parameters
if not ttw_mpi_path or not ttw_output_path:
return None, "Missing required parameters: ttw_mpi_path and ttw_output_path are required"
ttw_mpi_path = Path(ttw_mpi_path)
ttw_output_path = Path(ttw_output_path)
# Validate paths
if not ttw_mpi_path.exists():
return None, f"TTW .mpi file not found: {ttw_mpi_path}"
if not ttw_mpi_path.is_file():
return None, f"TTW .mpi path is not a file: {ttw_mpi_path}"
if ttw_mpi_path.suffix.lower() != '.mpi':
return None, f"TTW path does not have .mpi extension: {ttw_mpi_path}"
if not ttw_output_path.exists():
try:
ttw_output_path.mkdir(parents=True, exist_ok=True)
except Exception as e:
return None, f"Failed to create output directory: {e}"
# Check installation
if not self.ttw_installer_installed:
self.logger.info("TTW_Linux_Installer not found, attempting to install...")
success, message = self.install_ttw_installer()
if not success:
return None, f"TTW_Linux_Installer not installed and auto-install failed: {message}"
if not self.ttw_installer_executable_path or not self.ttw_installer_executable_path.is_file():
return None, "TTW_Linux_Installer executable not found"
# Detect game paths
required_games = ['Fallout 3', 'Fallout New Vegas']
detected_games = self.path_handler.find_vanilla_game_paths()
missing_games = [game for game in required_games if game not in detected_games]
if missing_games:
return None, f"Missing required games: {', '.join(missing_games)}. TTW requires both Fallout 3 and Fallout New Vegas."
fallout3_path = detected_games.get('Fallout 3')
falloutnv_path = detected_games.get('Fallout New Vegas')
if not fallout3_path or not falloutnv_path:
return None, "Could not detect Fallout 3 or Fallout New Vegas installation paths"
# Construct command
cmd = [
str(self.ttw_installer_executable_path),
"--fo3", str(fallout3_path),
"--fnv", str(falloutnv_path),
"--mpi", str(ttw_mpi_path),
"--output", str(ttw_output_path),
"--start"
]
self.logger.info(f"Executing TTW_Linux_Installer: {' '.join(cmd)}")
try:
env = get_clean_subprocess_env()
# Ensure lz4 is in PATH (critical for TTW_Linux_Installer)
import shutil
appdir = env.get('APPDIR')
if appdir:
tools_dir = os.path.join(appdir, 'opt', 'jackify', 'tools')
bundled_lz4 = os.path.join(tools_dir, 'lz4')
if os.path.exists(bundled_lz4) and os.access(bundled_lz4, os.X_OK):
current_path = env.get('PATH', '')
path_parts = [p for p in current_path.split(':') if p and p != tools_dir]
env['PATH'] = f"{tools_dir}:{':'.join(path_parts)}"
self.logger.info(f"Added bundled lz4 to PATH: {tools_dir}")
# Verify lz4 is available
lz4_path = shutil.which('lz4', path=env.get('PATH', ''))
if not lz4_path:
system_lz4 = shutil.which('lz4')
if system_lz4:
lz4_dir = os.path.dirname(system_lz4)
env['PATH'] = f"{lz4_dir}:{env.get('PATH', '')}"
self.logger.info(f"Added system lz4 to PATH: {lz4_dir}")
else:
return None, "lz4 is required but not found in PATH"
# Open output file for writing
output_fh = open(output_file, 'w', encoding='utf-8', buffering=1)
# Start process with output redirected to file
process = subprocess.Popen(
cmd,
cwd=str(self.ttw_installer_dir),
env=env,
stdout=output_fh,
stderr=subprocess.STDOUT,
bufsize=1
)
self.logger.info(f"TTW_Linux_Installer process started (PID: {process.pid}), output to {output_file}")
# Store file handle so it can be closed later
process._output_fh = output_fh
return process, None
except Exception as e:
self.logger.error(f"Error starting TTW_Linux_Installer: {e}", exc_info=True)
return None, f"Error starting TTW_Linux_Installer: {e}"
@staticmethod
def cleanup_ttw_process(process):
"""Clean up after TTW installation process.
Closes file handles and ensures process is terminated properly.
Args:
process: subprocess.Popen object from start_ttw_installation()
"""
if process:
# Close output file handle if attached
if hasattr(process, '_output_fh'):
try:
process._output_fh.close()
except Exception:
pass
# Terminate if still running
if process.poll() is None:
try:
process.terminate()
process.wait(timeout=5)
except Exception:
try:
process.kill()
except Exception:
pass
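For orientation, a minimal sketch of how a caller might drive the non-blocking flow above (start, poll, clean up), assuming pre-built filesystem and config handlers; the paths and poll interval are illustrative, not part of the handler.

```python
# Illustrative sketch only: drive start_ttw_installation() and clean up afterwards.
import time
from pathlib import Path

handler = TTWInstallerHandler(steamdeck=False, verbose=False,
                              filesystem_handler=fs_handler,   # assumed to exist
                              config_handler=cfg_handler)      # assumed to exist
output_log = Path.home() / "Jackify" / "ttw_install_output.log"

process, error = handler.start_ttw_installation(
    ttw_mpi_path=Path.home() / "Downloads" / "TTW_3.4.mpi",    # hypothetical path
    ttw_output_path=Path.home() / "Jackify" / "TTW_Output",    # hypothetical path
    output_file=output_log,
)
if process is None:
    print(f"Failed to start TTW installation: {error}")
else:
    try:
        while process.poll() is None:   # still running; caller reads output_log for progress
            time.sleep(2)
        print(f"Installer exited with code {process.returncode}")
    finally:
        TTWInstallerHandler.cleanup_ttw_process(process)
```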
def install_ttw_backend_with_output_stream(self, ttw_mpi_path: Path, ttw_output_path: Path, output_callback=None):
"""Install TTW with streaming output for GUI (DEPRECATED - use start_ttw_installation instead).
Args:
ttw_mpi_path: Path to TTW .mpi file
ttw_output_path: Target installation directory
output_callback: Optional callback function(line: str) for real-time output
Returns:
(success: bool, message: str)
"""
self.logger.info("Starting Tale of Two Wastelands installation via TTW_Linux_Installer (with output stream)")
# Validate parameters (same as install_ttw_backend)
if not ttw_mpi_path or not ttw_output_path:
return False, "Missing required parameters: ttw_mpi_path and ttw_output_path are required"
ttw_mpi_path = Path(ttw_mpi_path)
ttw_output_path = Path(ttw_output_path)
# Validate paths
if not ttw_mpi_path.exists():
return False, f"TTW .mpi file not found: {ttw_mpi_path}"
if not ttw_mpi_path.is_file():
return False, f"TTW .mpi path is not a file: {ttw_mpi_path}"
if ttw_mpi_path.suffix.lower() != '.mpi':
return False, f"TTW path does not have .mpi extension: {ttw_mpi_path}"
if not ttw_output_path.exists():
try:
ttw_output_path.mkdir(parents=True, exist_ok=True)
except Exception as e:
return False, f"Failed to create output directory: {e}"
# Check installation
if not self.ttw_installer_installed:
if output_callback:
output_callback("TTW_Linux_Installer not found, installing...")
self.logger.info("TTW_Linux_Installer not found, attempting to install...")
success, message = self.install_ttw_installer()
if not success:
return False, f"TTW_Linux_Installer not installed and auto-install failed: {message}"
if not self.ttw_installer_executable_path or not self.ttw_installer_executable_path.is_file():
return False, "TTW_Linux_Installer executable not found"
# Detect game paths
required_games = ['Fallout 3', 'Fallout New Vegas']
detected_games = self.path_handler.find_vanilla_game_paths()
missing_games = [game for game in required_games if game not in detected_games]
if missing_games:
return False, f"Missing required games: {', '.join(missing_games)}. TTW requires both Fallout 3 and Fallout New Vegas."
fallout3_path = detected_games.get('Fallout 3')
falloutnv_path = detected_games.get('Fallout New Vegas')
if not fallout3_path or not falloutnv_path:
return False, "Could not detect Fallout 3 or Fallout New Vegas installation paths"
# Construct command
cmd = [
str(self.ttw_installer_executable_path),
"--fo3", str(fallout3_path),
"--fnv", str(falloutnv_path),
"--mpi", str(ttw_mpi_path),
"--output", str(ttw_output_path),
"--start"
]
self.logger.info(f"Executing TTW_Linux_Installer: {' '.join(cmd)}")
try:
env = get_clean_subprocess_env()
process = subprocess.Popen(
cmd,
cwd=str(self.ttw_installer_dir),
env=env,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
bufsize=1,
universal_newlines=True
)
# Stream output to both logger and callback
if process.stdout:
for line in process.stdout:
line = line.rstrip()
if line:
self.logger.info(f"TTW_Linux_Installer: {line}")
if output_callback:
output_callback(line)
process.wait()
ret = process.returncode
if ret == 0:
self.logger.info("TTW installation completed successfully.")
return True, "TTW installation completed successfully!"
else:
self.logger.error(f"TTW installation process returned non-zero exit code: {ret}")
return False, f"TTW installation failed with exit code {ret}"
except Exception as e:
self.logger.error(f"Error executing TTW_Linux_Installer: {e}", exc_info=True)
return False, f"Error executing TTW_Linux_Installer: {e}"
@staticmethod
def integrate_ttw_into_modlist(ttw_output_path: Path, modlist_install_dir: Path, ttw_version: str) -> bool:
"""Integrate TTW output into a modlist's MO2 structure
This method:
1. Copies TTW output to the modlist's mods folder
2. Updates modlist.txt for all profiles
3. Updates plugins.txt with TTW ESMs in correct order
Args:
ttw_output_path: Path to TTW output directory
modlist_install_dir: Path to modlist installation directory
ttw_version: TTW version string (e.g., "3.4")
Returns:
bool: True if integration successful, False otherwise
"""
logging_handler = LoggingHandler()
logging_handler.rotate_log_for_logger('ttw-install', 'TTW_Install_workflow.log')
logger = logging_handler.setup_logger('ttw-install', 'TTW_Install_workflow.log')
try:
import shutil
# Validate paths
if not ttw_output_path.exists():
logger.error(f"TTW output path does not exist: {ttw_output_path}")
return False
mods_dir = modlist_install_dir / "mods"
profiles_dir = modlist_install_dir / "profiles"
if not mods_dir.exists() or not profiles_dir.exists():
logger.error(f"Invalid modlist directory structure: {modlist_install_dir}")
return False
# Create mod folder name with version
mod_folder_name = f"[NoDelete] Tale of Two Wastelands {ttw_version}" if ttw_version else "[NoDelete] Tale of Two Wastelands"
target_mod_dir = mods_dir / mod_folder_name
# Copy TTW output to mods directory
logger.info(f"Copying TTW output to {target_mod_dir}")
if target_mod_dir.exists():
logger.info(f"Removing existing TTW mod at {target_mod_dir}")
shutil.rmtree(target_mod_dir)
shutil.copytree(ttw_output_path, target_mod_dir)
logger.info("TTW output copied successfully")
# TTW ESMs in correct load order
ttw_esms = [
"Fallout3.esm",
"Anchorage.esm",
"ThePitt.esm",
"BrokenSteel.esm",
"PointLookout.esm",
"Zeta.esm",
"TaleOfTwoWastelands.esm",
"YUPTTW.esm"
]
# Process each profile
for profile_dir in profiles_dir.iterdir():
if not profile_dir.is_dir():
continue
profile_name = profile_dir.name
logger.info(f"Processing profile: {profile_name}")
# Update modlist.txt
modlist_file = profile_dir / "modlist.txt"
if modlist_file.exists():
# Read existing modlist
with open(modlist_file, 'r', encoding='utf-8') as f:
lines = f.readlines()
# Find the TTW placeholder separator and insert BEFORE it
separator_found = False
ttw_mod_line = f"+{mod_folder_name}\n"
new_lines = []
for line in lines:
# Skip existing TTW mod entries (but keep separators and other TTW-related mods)
# Match patterns: "+[NoDelete] Tale of Two Wastelands", "+[NoDelete] TTW", etc.
stripped = line.strip()
if stripped.startswith('+') and '[nodelete]' in stripped.lower():
# Check if it's the main TTW mod (not other TTW-related mods like "TTW Quick Start")
if ('tale of two wastelands' in stripped.lower() and 'quick start' not in stripped.lower() and
'loading wheel' not in stripped.lower()) or stripped.lower().startswith('+[nodelete] ttw '):
logger.info(f"Removing existing TTW mod entry: {stripped}")
continue
# Insert TTW mod BEFORE the placeholder separator (MO2 order is bottom-up)
# Check BEFORE appending so TTW mod appears before separator in file
if "put tale of two wastelands mod here" in line.lower() and "_separator" in line.lower():
new_lines.append(ttw_mod_line)
separator_found = True
logger.info(f"Inserted TTW mod before separator: {line.strip()}")
new_lines.append(line)
# If no separator found, append at the end
if not separator_found:
new_lines.append(ttw_mod_line)
logger.warning(f"No TTW separator found in {profile_name}, appended to end")
# Write back
with open(modlist_file, 'w', encoding='utf-8') as f:
f.writelines(new_lines)
logger.info(f"Updated modlist.txt for {profile_name}")
else:
logger.warning(f"modlist.txt not found for profile {profile_name}")
# Update plugins.txt
plugins_file = profile_dir / "plugins.txt"
if plugins_file.exists():
# Read existing plugins
with open(plugins_file, 'r', encoding='utf-8') as f:
lines = f.readlines()
# Remove any existing TTW ESMs
ttw_esm_set = set(esm.lower() for esm in ttw_esms)
lines = [line for line in lines if line.strip().lower() not in ttw_esm_set]
# Find CaravanPack.esm and insert TTW ESMs after it
insert_index = None
for i, line in enumerate(lines):
if line.strip().lower() == "caravanpack.esm":
insert_index = i + 1
break
if insert_index is not None:
# Insert TTW ESMs in correct order
for esm in reversed(ttw_esms):
lines.insert(insert_index, f"{esm}\n")
else:
logger.warning(f"CaravanPack.esm not found in {profile_name}, appending TTW ESMs to end")
for esm in ttw_esms:
lines.append(f"{esm}\n")
# Write back
with open(plugins_file, 'w', encoding='utf-8') as f:
f.writelines(lines)
logger.info(f"Updated plugins.txt for {profile_name}")
else:
logger.warning(f"plugins.txt not found for profile {profile_name}")
logger.info("TTW integration completed successfully")
return True
except Exception as e:
logger.error(f"Error integrating TTW into modlist: {e}", exc_info=True)
return False
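To visualise what `integrate_ttw_into_modlist` writes: the TTW mod entry is inserted immediately before the placeholder separator line in each profile's modlist.txt (MO2 reads this file bottom-up), roughly as in the fragment below; the neighbouring entries are invented for illustration.

```
+Example Mod
+[NoDelete] Tale of Two Wastelands 3.4
-Put Tale of Two Wastelands Mod Here_separator
+Another Example Mod
```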

View File

@@ -1016,8 +1016,8 @@ class WineUtils:
seen_names.add(version['name'])
if unique_versions:
logger.debug(f"Found {len(unique_versions)} total Proton version(s)")
logger.debug(f"Best available: {unique_versions[0]['name']} ({unique_versions[0]['type']})")
else:
logger.warning("No Proton versions found")

View File

@@ -137,6 +137,8 @@ class WinetricksHandler:
from ..handlers.wine_utils import WineUtils
config = ConfigHandler()
# Use Install Proton for component installation/texture processing
# get_proton_path() returns the Install Proton path
user_proton_path = config.get_proton_path()
# If user selected a specific Proton, try that first
@@ -162,9 +164,10 @@ class WinetricksHandler:
else:
self.logger.warning(f"User-selected Proton no longer exists: {user_proton_path}")
# Only auto-detect if user explicitly chose 'auto'
if not wine_binary:
if user_proton_path == 'auto':
self.logger.info("Auto-detecting Proton (user selected 'auto')")
best_proton = WineUtils.select_best_proton()
if best_proton:
wine_binary = WineUtils.find_proton_binary(best_proton['name'])
@@ -177,6 +180,11 @@ class WinetricksHandler:
self.logger.error(f"Available Proton versions: {[v['name'] for v in available_versions]}") self.logger.error(f"Available Proton versions: {[v['name'] for v in available_versions]}")
else: else:
self.logger.error("No Proton versions detected in standard Steam locations") self.logger.error("No Proton versions detected in standard Steam locations")
else:
# User selected a specific Proton but validation failed - this is an ERROR
self.logger.error(f"Cannot use configured Proton: {user_proton_path}")
self.logger.error("Please check Settings and ensure the Proton version still exists")
return False
if not wine_binary:
self.logger.error("Cannot run winetricks: No compatible Proton version found")
@@ -269,27 +277,23 @@ class WinetricksHandler:
# Check user preference for component installation method
from ..handlers.config_handler import ConfigHandler
config_handler = ConfigHandler()
use_winetricks = config_handler.get('use_winetricks_for_components', True)
# Get component installation method with migration
method = config_handler.get('component_installation_method', 'winetricks')
# legacy_dotnet_versions = ['dotnet40', 'dotnet472', 'dotnet48']
legacy_dotnet_versions = [] # ALL dotnet4.x versions disabled - universal registry fixes handle compatibility
# Migrate bundled_protontricks to system_protontricks (no longer supported)
if method == 'bundled_protontricks':
self.logger.warning("Bundled protontricks no longer supported, migrating to system_protontricks")
method = 'system_protontricks'
config_handler.set('component_installation_method', 'system_protontricks')
# Choose installation method based on user preference
if method == 'system_protontricks':
self.logger.info("Using system protontricks for all components")
legacy_found = [comp for comp in legacy_dotnet_versions if comp in components_to_install]
self.logger.info(f"Using hybrid approach: protontricks for legacy .NET versions {legacy_found} (reliable), {'winetricks' if use_winetricks else 'protontricks'} for other components")
return self._install_components_hybrid_approach(components_to_install, wineprefix, game_var, use_winetricks)
elif not use_winetricks:
self.logger.info("Using legacy approach: protontricks for all components")
return self._install_components_protontricks_only(components_to_install, wineprefix, game_var)
# else: method == 'winetricks' (default behavior continues below)
# Install all components together with winetricks (faster)
max_attempts = 3
winetricks_failed = False
last_error_details = None
@@ -361,23 +365,6 @@ class WinetricksHandler:
self.logger.error(f"Component verification failed (Attempt {attempt}/{max_attempts})") self.logger.error(f"Component verification failed (Attempt {attempt}/{max_attempts})")
# Continue to retry # Continue to retry
else: else:
# Special handling for dotnet40 verification issue (mimics protontricks behavior)
if "dotnet40" in components_to_install and "ngen.exe not found" in result.stderr:
self.logger.warning("dotnet40 verification warning (common in Steam Proton prefixes)")
self.logger.info("Checking if dotnet40 was actually installed...")
# Check if dotnet40 appears in winetricks.log (indicates successful installation)
log_path = os.path.join(wineprefix, 'winetricks.log')
if os.path.exists(log_path):
try:
with open(log_path, 'r') as f:
log_content = f.read()
if 'dotnet40' in log_content:
self.logger.info("dotnet40 found in winetricks.log - installation succeeded despite verification warning")
return True
except Exception as e:
self.logger.warning(f"Could not read winetricks.log: {e}")
# Store detailed error information for fallback diagnostics
last_error_details = {
'returncode': result.returncode,
@@ -463,7 +450,8 @@ class WinetricksHandler:
# Check if protontricks is available for fallback using centralized handler
try:
from .protontricks_handler import ProtontricksHandler
steamdeck = os.path.exists('/home/deck')
protontricks_handler = ProtontricksHandler(steamdeck)
protontricks_available = protontricks_handler.is_available()
if protontricks_available:
@@ -493,103 +481,24 @@ class WinetricksHandler:
def _reorder_components_for_installation(self, components: list) -> list:
"""
Reorder components for proper installation sequence if needed.
Currently returns components in original order.
"""
return components
reordered = []
# Add dotnet40 first if it exists
if "dotnet40" in components:
reordered.append("dotnet40")
# Add all other components in original order
for component in components:
if component != "dotnet40":
reordered.append(component)
if reordered != components:
self.logger.info(f"Reordered for dotnet40 compatibility: {reordered}")
return reordered
def _prepare_prefix_for_dotnet(self, wineprefix: str, wine_binary: str) -> bool:
"""
Prepare the Wine prefix for .NET installation by mimicking protontricks preprocessing.
This removes mono components and specific symlinks that interfere with .NET installation.
"""
try:
env = os.environ.copy()
env['WINEDEBUG'] = '-all'
env['WINEPREFIX'] = wineprefix
# Step 1: Remove mono components (mimics protontricks behavior)
self.logger.info("Preparing prefix for .NET installation: removing mono")
mono_result = subprocess.run([
self.winetricks_path,
'-q',
'remove_mono'
], env=env, capture_output=True, text=True, timeout=300)
if mono_result.returncode != 0:
self.logger.warning(f"Mono removal warning (non-critical): {mono_result.stderr}")
# Step 2: Set Windows version to XP (protontricks uses winxp for dotnet40)
self.logger.info("Setting Windows version to XP for .NET compatibility")
winxp_result = subprocess.run([
self.winetricks_path,
'-q',
'winxp'
], env=env, capture_output=True, text=True, timeout=300)
if winxp_result.returncode != 0:
self.logger.warning(f"Windows XP setting warning: {winxp_result.stderr}")
# Step 3: Remove mscoree.dll symlinks (critical for .NET installation)
self.logger.info("Removing problematic mscoree.dll symlinks")
dosdevices_path = os.path.join(wineprefix, 'dosdevices', 'c:')
mscoree_paths = [
os.path.join(dosdevices_path, 'windows', 'syswow64', 'mscoree.dll'),
os.path.join(dosdevices_path, 'windows', 'system32', 'mscoree.dll')
]
for dll_path in mscoree_paths:
if os.path.exists(dll_path) or os.path.islink(dll_path):
try:
os.remove(dll_path)
self.logger.debug(f"Removed symlink: {dll_path}")
except Exception as e:
self.logger.warning(f"Could not remove {dll_path}: {e}")
self.logger.info("Prefix preparation complete for .NET installation")
return True
except Exception as e:
self.logger.error(f"Error preparing prefix for .NET: {e}")
return False
def _install_components_separately(self, components: list, wineprefix: str, wine_binary: str, base_env: dict) -> bool:
"""
Install components separately for maximum compatibility.
This is necessary when dotnet40 is present to avoid component conflicts.
""" """
self.logger.info(f"Installing {len(components)} components separately (protontricks style)") self.logger.info(f"Installing {len(components)} components separately")
for i, component in enumerate(components, 1): for i, component in enumerate(components, 1):
self.logger.info(f"Installing component {i}/{len(components)}: {component}") self.logger.info(f"Installing component {i}/{len(components)}: {component}")
# Prepare environment for this component # Prepare environment for this component
env = base_env.copy() env = base_env.copy()
env['WINEPREFIX'] = wineprefix
env['WINE'] = wine_binary
if component == "dotnet40":
self.logger.info("Applying dotnet40 preprocessing")
if not self._prepare_prefix_for_dotnet(wineprefix, wine_binary):
self.logger.error("Failed to prepare prefix for dotnet40")
return False
else:
# For non-dotnet40 components, install in standard mode (Windows 10 will be set after all components)
self.logger.debug(f"Installing {component} in standard mode")
# Install this component
max_attempts = 3
@@ -602,9 +511,6 @@ class WinetricksHandler:
try:
cmd = [self.winetricks_path, '--unattended', component]
env['WINEPREFIX'] = wineprefix
env['WINE'] = wine_binary
self.logger.debug(f"Running: {' '.join(cmd)}") self.logger.debug(f"Running: {' '.join(cmd)}")
result = subprocess.run( result = subprocess.run(
@@ -620,22 +526,6 @@ class WinetricksHandler:
component_success = True
break
else:
# Special handling for dotnet40 verification issue
if component == "dotnet40" and "ngen.exe not found" in result.stderr:
self.logger.warning("dotnet40 verification warning (expected in Steam Proton)")
# Check winetricks.log for actual success
log_path = os.path.join(wineprefix, 'winetricks.log')
if os.path.exists(log_path):
try:
with open(log_path, 'r') as f:
if 'dotnet40' in f.read():
self.logger.info("dotnet40 confirmed in winetricks.log")
component_success = True
break
except Exception as e:
self.logger.warning(f"Could not read winetricks.log: {e}")
self.logger.error(f"{component} failed (attempt {attempt}): {result.stderr.strip()}") self.logger.error(f"{component} failed (attempt {attempt}): {result.stderr.strip()}")
self.logger.debug(f"Full stdout for {component}: {result.stdout.strip()}") self.logger.debug(f"Full stdout for {component}: {result.stdout.strip()}")
@@ -647,121 +537,10 @@ class WinetricksHandler:
return False
self.logger.info("All components installed successfully using separate sessions")
# Set Windows 10 mode after all component installation
self._set_windows_10_mode(wineprefix, env.get('WINE', ''))
return True
def _install_components_hybrid_approach(self, components: list, wineprefix: str, game_var: str, use_winetricks: bool = True) -> bool:
"""
Hybrid approach: Install legacy .NET Framework versions with protontricks (reliable),
then install remaining components with winetricks OR protontricks based on user preference.
Args:
components: List of all components to install
wineprefix: Wine prefix path
game_var: Game variable for AppID detection
use_winetricks: Whether to use winetricks for non-legacy components
Returns:
bool: True if all installations succeeded, False otherwise
"""
self.logger.info("Starting hybrid installation approach")
# Legacy .NET Framework versions that need protontricks
legacy_dotnet_versions = ['dotnet40', 'dotnet472', 'dotnet48']
# Separate legacy .NET (protontricks) from other components (winetricks)
protontricks_components = [comp for comp in components if comp in legacy_dotnet_versions]
other_components = [comp for comp in components if comp not in legacy_dotnet_versions]
self.logger.info(f"Protontricks components: {protontricks_components}")
self.logger.info(f"Other components: {other_components}")
# Step 1: Install legacy .NET Framework versions with protontricks if present
if protontricks_components:
self.logger.info(f"Installing legacy .NET versions {protontricks_components} using protontricks...")
if not self._install_legacy_dotnet_with_protontricks(protontricks_components, wineprefix, game_var):
self.logger.error(f"Failed to install {protontricks_components} with protontricks")
return False
self.logger.info(f"{protontricks_components} installation completed successfully with protontricks")
# Step 2: Install remaining components if any
if other_components:
if use_winetricks:
self.logger.info(f"Installing remaining components with winetricks: {other_components}")
# Use existing winetricks logic for other components
env = self._prepare_winetricks_environment(wineprefix)
if not env:
return False
return self._install_components_with_winetricks(other_components, wineprefix, env)
else:
self.logger.info(f"Installing remaining components with protontricks: {other_components}")
return self._install_components_protontricks_only(other_components, wineprefix, game_var)
self.logger.info("Hybrid component installation completed successfully")
# Set Windows 10 mode after all component installation (matches legacy script timing)
wine_binary = self._get_wine_binary_for_prefix(wineprefix)
self._set_windows_10_mode(wineprefix, wine_binary)
return True
def _install_legacy_dotnet_with_protontricks(self, legacy_components: list, wineprefix: str, game_var: str) -> bool:
"""
Install legacy .NET Framework versions using protontricks (known to work more reliably).
Args:
legacy_components: List of legacy .NET components to install (dotnet40, dotnet472, dotnet48)
wineprefix: Wine prefix path
game_var: Game variable for AppID detection
Returns:
bool: True if installation succeeded, False otherwise
"""
try:
# Extract AppID from wineprefix path (e.g., /path/to/compatdata/123456789/pfx -> 123456789)
appid = None
if 'compatdata' in wineprefix:
# Standard Steam compatdata structure
path_parts = Path(wineprefix).parts
for i, part in enumerate(path_parts):
if part == 'compatdata' and i + 1 < len(path_parts):
potential_appid = path_parts[i + 1]
if potential_appid.isdigit():
appid = potential_appid
break
if not appid:
self.logger.error(f"Could not extract AppID from wineprefix path: {wineprefix}")
return False
self.logger.info(f"Using AppID {appid} for protontricks dotnet40 installation")
# Import and use protontricks handler
from .protontricks_handler import ProtontricksHandler
# Determine if we're on Steam Deck (for protontricks handler)
steamdeck = os.path.exists('/home/deck')
protontricks_handler = ProtontricksHandler(steamdeck, logger=self.logger)
# Detect protontricks availability
if not protontricks_handler.detect_protontricks():
self.logger.error(f"Protontricks not available for legacy .NET installation: {legacy_components}")
return False
# Install legacy .NET components using protontricks
success = protontricks_handler.install_wine_components(appid, game_var, legacy_components)
if success:
self.logger.info(f"Legacy .NET components {legacy_components} installed successfully with protontricks")
return True
else:
self.logger.error(f"Legacy .NET components {legacy_components} installation failed with protontricks")
return False
except Exception as e:
self.logger.error(f"Error installing legacy .NET components {legacy_components} with protontricks: {e}", exc_info=True)
return False
def _prepare_winetricks_environment(self, wineprefix: str) -> Optional[dict]:
"""
Prepare the environment for winetricks installation.
@@ -799,9 +578,15 @@ class WinetricksHandler:
wine_binary = ge_proton_wine
if not wine_binary:
if user_proton_path == 'auto':
self.logger.info("Auto-detecting Proton (user selected 'auto')")
best_proton = WineUtils.select_best_proton()
if best_proton:
wine_binary = WineUtils.find_proton_binary(best_proton['name'])
else:
# User selected a specific Proton but validation failed
self.logger.error(f"Cannot prepare winetricks environment: configured Proton not found: {user_proton_path}")
return None
if not wine_binary or not (os.path.exists(wine_binary) and os.access(wine_binary, os.X_OK)):
self.logger.error(f"Cannot prepare winetricks environment: No compatible Proton found")
@@ -915,11 +700,16 @@ class WinetricksHandler:
def _install_components_protontricks_only(self, components: list, wineprefix: str, game_var: str) -> bool:
"""
Install all components using protontricks only.
This matches the behavior of the original bash script.
Args:
components: List of components to install
wineprefix: Path to wine prefix
game_var: Game variable name
""" """
try: try:
self.logger.info(f"Installing all components with protontricks (legacy method): {components}") self.logger.info(f"Installing all components with system protontricks: {components}")
# Import protontricks handler # Import protontricks handler
from ..handlers.protontricks_handler import ProtontricksHandler from ..handlers.protontricks_handler import ProtontricksHandler
@@ -1013,11 +803,17 @@ class WinetricksHandler:
elif os.path.exists(ge_proton_wine):
wine_binary = ge_proton_wine
# Only auto-detect if user explicitly chose 'auto'
if not wine_binary:
if user_proton_path == 'auto':
self.logger.info("Auto-detecting Proton (user selected 'auto')")
best_proton = WineUtils.select_best_proton()
if best_proton:
wine_binary = WineUtils.find_proton_binary(best_proton['name'])
else:
# User selected a specific Proton but validation failed
self.logger.error(f"Configured Proton not found: {user_proton_path}")
return ""
return wine_binary if wine_binary else "" return wine_binary if wine_binary else ""
except Exception as e: except Exception as e:

View File

@@ -68,6 +68,8 @@ class SystemInfo:
steam_root: Optional[Path] = None
steam_user_id: Optional[str] = None
proton_version: Optional[str] = None
is_flatpak_steam: bool = False
is_native_steam: bool = False
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary."""
@@ -76,4 +78,6 @@ class SystemInfo:
'steam_root': str(self.steam_root) if self.steam_root else None,
'steam_user_id': self.steam_user_id,
'proton_version': self.proton_version,
'is_flatpak_steam': self.is_flatpak_steam,
'is_native_steam': self.is_native_steam,
}

View File

@@ -0,0 +1,216 @@
"""
Data models for modlist metadata from jackify-engine JSON output.
These models match the JSON schema documented in MODLIST_METADATA_IMPLEMENTATION.md
"""
from dataclasses import dataclass, field
from typing import List, Optional
from datetime import datetime
@dataclass
class ModlistImages:
"""Image URLs for modlist (small thumbnail and large banner)"""
small: str
large: str
@dataclass
class ModlistLinks:
"""External links associated with the modlist"""
image: Optional[str] = None
readme: Optional[str] = None
download: Optional[str] = None
discordURL: Optional[str] = None
websiteURL: Optional[str] = None
@dataclass
class ModlistSizes:
"""Size information for modlist downloads and installation"""
downloadSize: int
downloadSizeFormatted: str
installSize: int
installSizeFormatted: str
totalSize: int
totalSizeFormatted: str
numberOfArchives: int
numberOfInstalledFiles: int
@dataclass
class ModlistValidation:
"""Validation status from Wabbajack build server (optional)"""
failed: int = 0
passed: int = 0
updating: int = 0
mirrored: int = 0
modListIsMissing: bool = False
hasFailures: bool = False
@dataclass
class ModlistMetadata:
"""Complete modlist metadata from jackify-engine"""
# Basic information
title: str
description: str
author: str
maintainers: List[str]
namespacedName: str
repositoryName: str
machineURL: str
# Game information
game: str
gameHumanFriendly: str
# Status flags
official: bool
nsfw: bool
utilityList: bool
forceDown: bool
imageContainsTitle: bool
# Version information
version: Optional[str] = None
displayVersionOnlyInInstallerView: bool = False
# Dates
dateCreated: Optional[str] = None # ISO8601 format
dateUpdated: Optional[str] = None # ISO8601 format
# Categorization
tags: List[str] = field(default_factory=list)
# Nested objects
links: Optional[ModlistLinks] = None
sizes: Optional[ModlistSizes] = None
images: Optional[ModlistImages] = None
# Optional data (only if flags specified)
validation: Optional[ModlistValidation] = None
mods: List[str] = field(default_factory=list)
def is_available(self) -> bool:
"""Check if modlist is available for installation"""
if self.forceDown:
return False
if self.validation and self.validation.hasFailures:
return False
return True
def is_broken(self) -> bool:
"""Check if modlist has validation failures"""
return self.validation.hasFailures if self.validation else False
def get_date_updated_datetime(self) -> Optional[datetime]:
"""Parse dateUpdated string to datetime object"""
if not self.dateUpdated:
return None
try:
return datetime.fromisoformat(self.dateUpdated.replace('Z', '+00:00'))
except (ValueError, AttributeError):
return None
def get_date_created_datetime(self) -> Optional[datetime]:
"""Parse dateCreated string to datetime object"""
if not self.dateCreated:
return None
try:
return datetime.fromisoformat(self.dateCreated.replace('Z', '+00:00'))
except (ValueError, AttributeError):
return None
@dataclass
class ModlistMetadataResponse:
"""Root response object from jackify-engine list-modlists --json"""
metadataVersion: str
timestamp: str # ISO8601 format
count: int
modlists: List[ModlistMetadata]
def get_timestamp_datetime(self) -> Optional[datetime]:
"""Parse timestamp string to datetime object"""
try:
return datetime.fromisoformat(self.timestamp.replace('Z', '+00:00'))
except (ValueError, AttributeError):
return None
def filter_by_game(self, game: str) -> List[ModlistMetadata]:
"""Filter modlists by game name"""
return [m for m in self.modlists if m.game.lower() == game.lower()]
def filter_available_only(self) -> List[ModlistMetadata]:
"""Filter to only available (non-broken, non-forced-down) modlists"""
return [m for m in self.modlists if m.is_available()]
def filter_by_tag(self, tag: str) -> List[ModlistMetadata]:
"""Filter modlists by tag"""
return [m for m in self.modlists if tag.lower() in [t.lower() for t in m.tags]]
def filter_official_only(self) -> List[ModlistMetadata]:
"""Filter to only official modlists"""
return [m for m in self.modlists if m.official]
def search(self, query: str) -> List[ModlistMetadata]:
"""Search modlists by title, description, or author"""
query_lower = query.lower()
return [
m for m in self.modlists
if query_lower in m.title.lower()
or query_lower in m.description.lower()
or query_lower in m.author.lower()
]
def parse_modlist_metadata_from_dict(data: dict) -> ModlistMetadata:
"""Parse a modlist metadata dictionary into ModlistMetadata object"""
# Parse nested objects
images = ModlistImages(**data['images']) if 'images' in data and data['images'] else None
links = ModlistLinks(**data['links']) if 'links' in data and data['links'] else None
sizes = ModlistSizes(**data['sizes']) if 'sizes' in data and data['sizes'] else None
validation = ModlistValidation(**data['validation']) if 'validation' in data and data['validation'] else None
# Create ModlistMetadata with nested objects
metadata = ModlistMetadata(
title=data['title'],
description=data['description'],
author=data['author'],
maintainers=data.get('maintainers', []),
namespacedName=data['namespacedName'],
repositoryName=data['repositoryName'],
machineURL=data['machineURL'],
game=data['game'],
gameHumanFriendly=data['gameHumanFriendly'],
official=data['official'],
nsfw=data['nsfw'],
utilityList=data['utilityList'],
forceDown=data['forceDown'],
imageContainsTitle=data['imageContainsTitle'],
version=data.get('version'),
displayVersionOnlyInInstallerView=data.get('displayVersionOnlyInInstallerView', False),
dateCreated=data.get('dateCreated'),
dateUpdated=data.get('dateUpdated'),
tags=data.get('tags', []),
links=links,
sizes=sizes,
images=images,
validation=validation,
mods=data.get('mods', [])
)
return metadata
def parse_modlist_metadata_response(data: dict) -> ModlistMetadataResponse:
"""Parse the full JSON response from jackify-engine into ModlistMetadataResponse"""
modlists = [parse_modlist_metadata_from_dict(m) for m in data.get('modlists', [])]
return ModlistMetadataResponse(
metadataVersion=data['metadataVersion'],
timestamp=data['timestamp'],
count=data['count'],
modlists=modlists
)
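A minimal sketch of how these models might be consumed on the GUI side, assuming the JSON has been produced by `jackify-engine list-modlists --json` (the command form named in the docstring above); the subprocess call and the game identifier used in the filter are assumptions, not part of this module.

```python
# Illustrative sketch only: parse engine output into the models above and filter it.
import json
import subprocess

raw = subprocess.run(
    ["jackify-engine", "list-modlists", "--json"],   # command form taken from the docstring
    capture_output=True, text=True, check=True,
).stdout

response = parse_modlist_metadata_response(json.loads(raw))
available = response.filter_available_only()
skyrim = [m for m in available if m.game.lower() == "skyrimspecialedition"]  # identifier assumed
for m in skyrim:
    size = m.sizes.totalSizeFormatted if m.sizes else "unknown size"
    print(f"{m.title} by {m.author} ({size})")
```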

View File

@@ -29,9 +29,10 @@ class AutomatedPrefixService:
and direct Proton wrapper integration.
"""
def __init__(self, system_info=None):
self.scripts_dir = Path.home() / "Jackify/scripts"
self.scripts_dir.mkdir(parents=True, exist_ok=True)
self.system_info = system_info
# Use shared timing for consistency across services
def _get_progress_timestamp(self):
@@ -552,7 +553,9 @@ exit"""
""" """
try: try:
from .steam_restart_service import robust_steam_restart from .steam_restart_service import robust_steam_restart
return robust_steam_restart(progress_callback=None, timeout=60) # Use system_info if available (backward compatibility)
system_info = getattr(self, 'system_info', None)
return robust_steam_restart(progress_callback=None, timeout=60, system_info=system_info)
except Exception as e: except Exception as e:
logger.error(f"Error restarting Steam: {e}") logger.error(f"Error restarting Steam: {e}")
return False return False
@@ -930,21 +933,34 @@ echo Prefix creation complete.
if 'CompatToolMapping' not in config_data['Software']['Valve']['Steam']: if 'CompatToolMapping' not in config_data['Software']['Valve']['Steam']:
config_data['Software']['Valve']['Steam']['CompatToolMapping'] = {} config_data['Software']['Valve']['Steam']['CompatToolMapping'] = {}
# Set the Proton version for this AppID # Set the Proton version for this AppID using Steam's expected format
config_data['Software']['Valve']['Steam']['CompatToolMapping'][str(appid)] = proton_version # Steam requires a dict with 'name', 'config', and 'priority' keys
config_data['Software']['Valve']['Steam']['CompatToolMapping'][str(appid)] = {
'name': proton_version,
'config': '',
'priority': '250'
}
# Write back to file (text format) # Write back to file (text format)
with open(config_path, 'w') as f: with open(config_path, 'w') as f:
vdf.dump(config_data, f) vdf.dump(config_data, f)
# Ensure file is fully written to disk before Steam restart
import os
os.fsync(f.fileno()) if hasattr(f, 'fileno') else None
logger.info(f"Set Proton version {proton_version} for AppID {appid}") logger.info(f"Set Proton version {proton_version} for AppID {appid}")
debug_print(f"[DEBUG] Set Proton version {proton_version} for AppID {appid} in config.vdf") debug_print(f"[DEBUG] Set Proton version {proton_version} for AppID {appid} in config.vdf")
# Small delay to ensure filesystem write completes
import time
time.sleep(0.5)
# Verify it was set correctly # Verify it was set correctly
with open(config_path, 'r') as f: with open(config_path, 'r') as f:
verify_data = vdf.load(f) verify_data = vdf.load(f)
actual_value = verify_data.get('Software', {}).get('Valve', {}).get('Steam', {}).get('CompatToolMapping', {}).get(str(appid)) compat_mapping = verify_data.get('Software', {}).get('Valve', {}).get('Steam', {}).get('CompatToolMapping', {}).get(str(appid))
debug_print(f"[DEBUG] Verification: AppID {appid} -> {actual_value}") debug_print(f"[DEBUG] Verification: AppID {appid} -> {compat_mapping}")
return True return True
@@ -1045,6 +1061,17 @@ echo Prefix creation complete.
env = os.environ.copy() env = os.environ.copy()
env['STEAM_COMPAT_DATA_PATH'] = str(prefix_path) env['STEAM_COMPAT_DATA_PATH'] = str(prefix_path)
env['STEAM_COMPAT_APP_ID'] = str(positive_appid) # Use positive AppID for environment env['STEAM_COMPAT_APP_ID'] = str(positive_appid) # Use positive AppID for environment
# Determine correct Steam root based on installation type
from ..handlers.path_handler import PathHandler
path_handler = PathHandler()
steam_library = path_handler.find_steam_library()
if steam_library and steam_library.name == "common":
# Extract Steam root from library path: .../Steam/steamapps/common -> .../Steam
steam_root = steam_library.parent.parent
env['STEAM_COMPAT_CLIENT_INSTALL_PATH'] = str(steam_root)
else:
# Fallback to legacy path if detection fails
env['STEAM_COMPAT_CLIENT_INSTALL_PATH'] = str(Path.home() / ".local/share/Steam") env['STEAM_COMPAT_CLIENT_INSTALL_PATH'] = str(Path.home() / ".local/share/Steam")
# Build the command # Build the command
@@ -1109,7 +1136,10 @@ echo Prefix creation complete.
def _get_compatdata_path_for_appid(self, appid: int) -> Optional[Path]: def _get_compatdata_path_for_appid(self, appid: int) -> Optional[Path]:
""" """
Get the compatdata path for a given AppID using existing Jackify functions. Get the compatdata path for a given AppID.
First tries to find existing compatdata, then constructs path from libraryfolders.vdf
for creating new prefixes.
Args: Args:
appid: The AppID to get the path for appid: The AppID to get the path for
@@ -1117,22 +1147,32 @@ echo Prefix creation complete.
Returns: Returns:
Path to the compatdata directory, or None if not found Path to the compatdata directory, or None if not found
""" """
# Use existing Jackify path detection
from ..handlers.path_handler import PathHandler from ..handlers.path_handler import PathHandler
# First, try to find existing compatdata
compatdata_path = PathHandler.find_compat_data(str(appid)) compatdata_path = PathHandler.find_compat_data(str(appid))
if compatdata_path: if compatdata_path:
return compatdata_path return compatdata_path
# Fallback: construct the path manually # Prefix doesn't exist yet - determine where to create it from libraryfolders.vdf
possible_bases = [ library_paths = PathHandler.get_all_steam_library_paths()
if library_paths:
# Use the first library (typically the default library)
# Construct compatdata path: library_path/steamapps/compatdata/appid
first_library = library_paths[0]
compatdata_base = first_library / "steamapps" / "compatdata"
return compatdata_base / str(appid)
# Only fallback if VDF parsing completely fails
logger.warning("Could not get library paths from libraryfolders.vdf, using fallback locations")
fallback_bases = [
Path.home() / ".var/app/com.valvesoftware.Steam/data/Steam/steamapps/compatdata",
Path.home() / ".var/app/com.valvesoftware.Steam/.local/share/Steam/steamapps/compatdata",
Path.home() / ".steam/steam/steamapps/compatdata", Path.home() / ".steam/steam/steamapps/compatdata",
Path.home() / ".local/share/Steam/steamapps/compatdata", Path.home() / ".local/share/Steam/steamapps/compatdata",
Path.home() / ".var/app/com.valvesoftware.Steam/home/.steam/steam/steamapps/compatdata",
Path.home() / ".var/app/com.valvesoftware.Steam/home/.local/share/Steam/steamapps/compatdata",
] ]
for base_path in possible_bases: for base_path in fallback_bases:
if base_path.is_dir(): if base_path.is_dir():
return base_path / str(appid) return base_path / str(appid)
@@ -2666,10 +2706,41 @@ echo Prefix creation complete.
True if successful, False otherwise True if successful, False otherwise
""" """
try: try:
# Determine Steam locations based on installation type
from ..handlers.path_handler import PathHandler
path_handler = PathHandler()
all_libraries = path_handler.get_all_steam_library_paths()
# Check if we have Flatpak Steam by looking for .var/app/com.valvesoftware.Steam in library paths
is_flatpak_steam = any('.var/app/com.valvesoftware.Steam' in str(lib) for lib in all_libraries)
if is_flatpak_steam and all_libraries:
# Flatpak Steam: Use the actual library root from libraryfolders.vdf
# Compatdata should be in the library root, not the client root
flatpak_library_root = all_libraries[0] # Use first library (typically the default)
flatpak_client_root = flatpak_library_root.parent.parent / ".steam/steam"
if not flatpak_library_root.is_dir():
logger.error(
f"Flatpak Steam library root does not exist: {flatpak_library_root}"
)
return False
steam_root = flatpak_client_root if flatpak_client_root.is_dir() else flatpak_library_root
# CRITICAL: compatdata must be in the library root, not client root
compatdata_dir = flatpak_library_root / "steamapps/compatdata"
proton_common_dir = flatpak_library_root / "steamapps/common"
else:
# Native Steam (or unknown): fall back to legacy ~/.steam/steam layout
steam_root = Path.home() / ".steam/steam" steam_root = Path.home() / ".steam/steam"
compatdata_dir = steam_root / "steamapps/compatdata" compatdata_dir = steam_root / "steamapps/compatdata"
proton_common_dir = steam_root / "steamapps/common" proton_common_dir = steam_root / "steamapps/common"
# Ensure compatdata root exists and is a directory we actually want to use
if not compatdata_dir.is_dir():
logger.error(f"Compatdata root does not exist: {compatdata_dir}. Aborting prefix creation.")
return False
# Find a Proton wrapper to use # Find a Proton wrapper to use
proton_path = self._find_proton_binary(proton_common_dir) proton_path = self._find_proton_binary(proton_common_dir)
if not proton_path: if not proton_path:
@@ -2686,9 +2757,9 @@ echo Prefix creation complete.
env['WINEDEBUG'] = '-all' env['WINEDEBUG'] = '-all'
env['WINEDLLOVERRIDES'] = 'msdia80.dll=n;conhost.exe=d;cmd.exe=d' env['WINEDLLOVERRIDES'] = 'msdia80.dll=n;conhost.exe=d;cmd.exe=d'
# Create the compatdata directory # Create the compatdata directory for this AppID (but never the whole tree)
compat_dir = compatdata_dir / str(abs(appid)) compat_dir = compatdata_dir / str(abs(appid))
compat_dir.mkdir(parents=True, exist_ok=True) compat_dir.mkdir(exist_ok=True)
logger.info(f"Creating Proton prefix for AppID {appid}") logger.info(f"Creating Proton prefix for AppID {appid}")
logger.info(f"STEAM_COMPAT_CLIENT_INSTALL_PATH={env['STEAM_COMPAT_CLIENT_INSTALL_PATH']}") logger.info(f"STEAM_COMPAT_CLIENT_INSTALL_PATH={env['STEAM_COMPAT_CLIENT_INSTALL_PATH']}")

View File

@@ -0,0 +1,474 @@
"""
Service for fetching and managing modlist metadata for the gallery view.
Handles jackify-engine integration, caching, and image management.
"""
import json
import subprocess
import time
import threading
from pathlib import Path
from typing import Optional, List, Dict
from datetime import datetime, timedelta
import urllib.request
from jackify.backend.models.modlist_metadata import (
ModlistMetadataResponse,
ModlistMetadata,
parse_modlist_metadata_response
)
from jackify.backend.core.modlist_operations import get_jackify_engine_path
from jackify.backend.handlers.config_handler import ConfigHandler
from jackify.shared.paths import get_jackify_data_dir
class ModlistGalleryService:
"""Service for fetching and caching modlist metadata from jackify-engine"""
CACHE_VALIDITY_DAYS = 7 # Refresh cache after 7 days
# CRITICAL: Thread lock to prevent concurrent engine calls that could cause recursive spawning
_engine_call_lock = threading.Lock()
def __init__(self):
"""Initialize the gallery service"""
self.config_handler = ConfigHandler()
# Cache directories in Jackify Data Directory
jackify_data_dir = get_jackify_data_dir()
self.CACHE_DIR = jackify_data_dir / "modlist-cache" / "metadata"
self.IMAGE_CACHE_DIR = jackify_data_dir / "modlist-cache" / "images"
self.METADATA_CACHE_FILE = self.CACHE_DIR / "modlist_metadata.json"
self._ensure_cache_dirs()
# Tag metadata caches (avoid refetching per render)
self._tag_mappings_cache: Optional[Dict[str, str]] = None
self._tag_mapping_lookup: Optional[Dict[str, str]] = None
self._allowed_tags_cache: Optional[set] = None
self._allowed_tags_lookup: Optional[Dict[str, str]] = None
def _ensure_cache_dirs(self):
"""Create cache directories if they don't exist"""
self.CACHE_DIR.mkdir(parents=True, exist_ok=True)
self.IMAGE_CACHE_DIR.mkdir(parents=True, exist_ok=True)
def fetch_modlist_metadata(
self,
include_validation: bool = True,
include_search_index: bool = False,
sort_by: str = "title",
force_refresh: bool = False
) -> Optional[ModlistMetadataResponse]:
"""
Fetch modlist metadata from jackify-engine.
Args:
include_validation: Include validation status (slower)
include_search_index: Include mod search index (slower)
sort_by: Sort order (title, size, date)
force_refresh: Force refresh even if cache is valid
Returns:
ModlistMetadataResponse or None if fetch fails
"""
# Check cache first unless force refresh
# If include_search_index is True, check if cache has mods before using it
if not force_refresh:
cached = self._load_from_cache()
if cached and self._is_cache_valid():
# If we need search index, check if cached data has mods
if include_search_index:
# Check if at least one modlist has mods (indicates cache was built with search index)
has_mods = any(hasattr(m, 'mods') and m.mods for m in cached.modlists)
if has_mods:
return cached # Cache has mods, use it
# Cache doesn't have mods, need to fetch fresh
else:
return cached # Don't need search index, use cache
# Fetch fresh data from jackify-engine
try:
metadata = self._fetch_from_engine(
include_validation=include_validation,
include_search_index=include_search_index,
sort_by=sort_by
)
if metadata:
self._save_to_cache(metadata)
return metadata
except Exception as e:
print(f"Error fetching modlist metadata: {e}")
# Fall back to cache if available
return self._load_from_cache()
def _fetch_from_engine(
self,
include_validation: bool,
include_search_index: bool,
sort_by: str
) -> Optional[ModlistMetadataResponse]:
"""Call jackify-engine to fetch modlist metadata"""
# CRITICAL: Use thread lock to prevent concurrent engine calls
# Multiple simultaneous calls could cause recursive spawning issues
with self._engine_call_lock:
# CRITICAL: Get engine path BEFORE cleaning environment
# get_jackify_engine_path() may need APPDIR to locate the engine
engine_path = get_jackify_engine_path()
if not engine_path:
raise FileNotFoundError("jackify-engine not found")
# Build command
cmd = [str(engine_path), "list-modlists", "--json", "--sort-by", sort_by]
if include_validation:
cmd.append("--include-validation-status")
if include_search_index:
cmd.append("--include-search-index")
# Execute command
# CRITICAL: Use centralized clean environment to prevent AppImage recursive spawning
# This must happen AFTER engine path resolution
from jackify.backend.handlers.subprocess_utils import get_clean_subprocess_env
clean_env = get_clean_subprocess_env()
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=300, # 5 minute timeout for large data
env=clean_env
)
if result.returncode != 0:
raise RuntimeError(f"jackify-engine failed: {result.stderr}")
# Parse JSON response - skip progress messages and extract JSON
# jackify-engine prints progress to stdout before the JSON
stdout = result.stdout.strip()
# Find the start of JSON (first '{' on its own line)
lines = stdout.split('\n')
json_start = 0
for i, line in enumerate(lines):
if line.strip().startswith('{'):
json_start = i
break
json_text = '\n'.join(lines[json_start:])
data = json.loads(json_text)
return parse_modlist_metadata_response(data)
def _load_from_cache(self) -> Optional[ModlistMetadataResponse]:
"""Load metadata from cache file"""
if not self.METADATA_CACHE_FILE.exists():
return None
try:
with open(self.METADATA_CACHE_FILE, 'r', encoding='utf-8') as f:
data = json.load(f)
return parse_modlist_metadata_response(data)
except Exception as e:
print(f"Error loading cache: {e}")
return None
def _save_to_cache(self, metadata: ModlistMetadataResponse):
"""Save metadata to cache file"""
try:
# Convert to dict for JSON serialization
data = {
'metadataVersion': metadata.metadataVersion,
'timestamp': metadata.timestamp,
'count': metadata.count,
'modlists': [self._metadata_to_dict(m) for m in metadata.modlists]
}
with open(self.METADATA_CACHE_FILE, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2)
except Exception as e:
print(f"Error saving cache: {e}")
def _metadata_to_dict(self, metadata: ModlistMetadata) -> dict:
"""Convert ModlistMetadata to dict for JSON serialization"""
result = {
'title': metadata.title,
'description': metadata.description,
'author': metadata.author,
'maintainers': metadata.maintainers,
'namespacedName': metadata.namespacedName,
'repositoryName': metadata.repositoryName,
'machineURL': metadata.machineURL,
'game': metadata.game,
'gameHumanFriendly': metadata.gameHumanFriendly,
'official': metadata.official,
'nsfw': metadata.nsfw,
'utilityList': metadata.utilityList,
'forceDown': metadata.forceDown,
'imageContainsTitle': metadata.imageContainsTitle,
'version': metadata.version,
'displayVersionOnlyInInstallerView': metadata.displayVersionOnlyInInstallerView,
'dateCreated': metadata.dateCreated,
'dateUpdated': metadata.dateUpdated,
'tags': metadata.tags,
'mods': metadata.mods
}
if metadata.images:
result['images'] = {
'small': metadata.images.small,
'large': metadata.images.large
}
if metadata.links:
result['links'] = {
'image': metadata.links.image,
'readme': metadata.links.readme,
'download': metadata.links.download,
'discordURL': metadata.links.discordURL,
'websiteURL': metadata.links.websiteURL
}
if metadata.sizes:
result['sizes'] = {
'downloadSize': metadata.sizes.downloadSize,
'downloadSizeFormatted': metadata.sizes.downloadSizeFormatted,
'installSize': metadata.sizes.installSize,
'installSizeFormatted': metadata.sizes.installSizeFormatted,
'totalSize': metadata.sizes.totalSize,
'totalSizeFormatted': metadata.sizes.totalSizeFormatted,
'numberOfArchives': metadata.sizes.numberOfArchives,
'numberOfInstalledFiles': metadata.sizes.numberOfInstalledFiles
}
if metadata.validation:
result['validation'] = {
'failed': metadata.validation.failed,
'passed': metadata.validation.passed,
'updating': metadata.validation.updating,
'mirrored': metadata.validation.mirrored,
'modListIsMissing': metadata.validation.modListIsMissing,
'hasFailures': metadata.validation.hasFailures
}
return result
def _is_cache_valid(self) -> bool:
"""Check if cache is still valid based on age"""
if not self.METADATA_CACHE_FILE.exists():
return False
# Check file modification time
mtime = datetime.fromtimestamp(self.METADATA_CACHE_FILE.stat().st_mtime)
age = datetime.now() - mtime
return age < timedelta(days=self.CACHE_VALIDITY_DAYS)
def download_images(
self,
game_filter: Optional[str] = None,
size: str = "both",
overwrite: bool = False
) -> bool:
"""
Download modlist images to cache using jackify-engine.
Args:
game_filter: Filter by game name (None = all games)
size: Image size to download (small, large, both)
overwrite: Overwrite existing images
Returns:
True if successful, False otherwise
"""
# Build command (engine path will be resolved inside lock)
cmd = [
"placeholder", # Will be replaced with actual engine path
"download-modlist-images",
"--output", str(self.IMAGE_CACHE_DIR),
"--size", size
]
if game_filter:
cmd.extend(["--game", game_filter])
if overwrite:
cmd.append("--overwrite")
# Execute command
try:
# CRITICAL: Use thread lock to prevent concurrent engine calls
with self._engine_call_lock:
# CRITICAL: Get engine path BEFORE cleaning environment
# get_jackify_engine_path() may need APPDIR to locate the engine
engine_path = get_jackify_engine_path()
if not engine_path:
return False
# Update cmd with resolved engine path
cmd[0] = str(engine_path)
# CRITICAL: Use centralized clean environment to prevent AppImage recursive spawning
# This must happen AFTER engine path resolution
from jackify.backend.handlers.subprocess_utils import get_clean_subprocess_env
clean_env = get_clean_subprocess_env()
result = subprocess.run(
cmd,
capture_output=True,
text=True,
timeout=3600, # 1 hour timeout for downloads
env=clean_env
)
return result.returncode == 0
except Exception as e:
print(f"Error downloading images: {e}")
return False
def get_cached_image_path(self, metadata: ModlistMetadata, size: str = "large") -> Optional[Path]:
"""
Get path to cached image for a modlist (only if it exists).
Args:
metadata: Modlist metadata
size: Image size (small or large)
Returns:
Path to cached image or None if not cached
"""
filename = f"{metadata.machineURL}_{size}.webp"
image_path = self.IMAGE_CACHE_DIR / metadata.repositoryName / filename
if image_path.exists():
return image_path
return None
def get_image_cache_path(self, metadata: ModlistMetadata, size: str = "large") -> Path:
"""
Get path where image should be cached (always returns path, even if file doesn't exist).
Args:
metadata: Modlist metadata
size: Image size (small or large)
Returns:
Path where image should be cached
"""
filename = f"{metadata.machineURL}_{size}.webp"
return self.IMAGE_CACHE_DIR / metadata.repositoryName / filename
def get_image_url(self, metadata: ModlistMetadata, size: str = "large") -> Optional[str]:
"""
Get image URL for a modlist.
Args:
metadata: Modlist metadata
size: Image size (small or large)
Returns:
Image URL or None if images not available
"""
if not metadata.images:
return None
return metadata.images.large if size == "large" else metadata.images.small
def clear_cache(self):
"""Clear all cached metadata and images"""
if self.METADATA_CACHE_FILE.exists():
self.METADATA_CACHE_FILE.unlink()
# Clear image cache
if self.IMAGE_CACHE_DIR.exists():
import shutil
shutil.rmtree(self.IMAGE_CACHE_DIR)
self.IMAGE_CACHE_DIR.mkdir(parents=True, exist_ok=True)
def get_installed_modlists(self) -> List[str]:
"""
Get list of installed modlist machine URLs.
Returns:
List of machine URLs for installed modlists
"""
# TODO: Integrate with existing modlist database/config
# For now, return empty list - will be implemented when integrated with existing modlist tracking
return []
def is_modlist_installed(self, machine_url: str) -> bool:
"""Check if a modlist is installed"""
return machine_url in self.get_installed_modlists()
def load_tag_mappings(self) -> Dict[str, str]:
"""
Load tag mappings from Wabbajack GitHub repository.
Maps variant tag names to canonical tag names.
Returns:
Dictionary mapping variant tags to canonical tags
"""
url = "https://raw.githubusercontent.com/wabbajack-tools/mod-lists/master/tag_mappings.json"
try:
with urllib.request.urlopen(url, timeout=10) as response:
data = json.loads(response.read().decode('utf-8'))
return data
except Exception as e:
print(f"Warning: Could not load tag mappings: {e}")
return {}
def load_allowed_tags(self) -> set:
"""
Load allowed tags from Wabbajack GitHub repository.
Returns:
Set of allowed tag names (preserving original case)
"""
url = "https://raw.githubusercontent.com/wabbajack-tools/mod-lists/master/allowed_tags.json"
try:
with urllib.request.urlopen(url, timeout=10) as response:
data = json.loads(response.read().decode('utf-8'))
return set(data) # Return as set preserving original case
except Exception as e:
print(f"Warning: Could not load allowed tags: {e}")
return set()
def _ensure_tag_metadata(self):
"""Ensure tag mappings/allowed tags (and lookups) are cached."""
if self._tag_mappings_cache is None:
self._tag_mappings_cache = self.load_tag_mappings()
if self._tag_mapping_lookup is None:
self._tag_mapping_lookup = {k.lower(): v for k, v in self._tag_mappings_cache.items()}
if self._allowed_tags_cache is None:
self._allowed_tags_cache = self.load_allowed_tags()
if self._allowed_tags_lookup is None:
self._allowed_tags_lookup = {tag.lower(): tag for tag in self._allowed_tags_cache}
def normalize_tag_value(self, tag: str) -> str:
"""
Normalize a tag to its canonical display form using Wabbajack mappings.
Returns the normalized tag (original casing preserved when possible).
"""
if not tag:
return ""
self._ensure_tag_metadata()
tag_key = tag.strip().lower()
if not tag_key:
return ""
canonical = self._tag_mapping_lookup.get(tag_key, tag.strip())
# Prefer allowed tag casing if available
return self._allowed_tags_lookup.get(canonical.lower(), canonical)
def normalize_tags_for_display(self, tags: Optional[List[str]]) -> List[str]:
"""Normalize a list of tags for UI display (deduped, canonical casing)."""
if not tags:
return []
self._ensure_tag_metadata()
normalized = []
seen = set()
for tag in tags:
normalized_tag = self.normalize_tag_value(tag)
key = normalized_tag.lower()
if key and key not in seen:
normalized.append(normalized_tag)
seen.add(key)
return normalized
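
A short usage sketch for the gallery service above; the game name and result slicing are illustrative only, and it assumes jackify-engine is resolvable on this machine.

```python
# Illustrative driver for ModlistGalleryService.
service = ModlistGalleryService()
response = service.fetch_modlist_metadata(include_validation=True)
if response:
    available = [m for m in response.filter_by_game("Skyrim Special Edition") if m.is_available()]
    for modlist in available[:5]:
        tags = service.normalize_tags_for_display(modlist.tags)
        image = service.get_cached_image_path(modlist, size="small")
        print(modlist.title, tags, image or "image not cached yet")
```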

View File

@@ -286,7 +286,17 @@ class ModlistService:
             return False
         # Build command (copied from working code)
-        cmd = [engine_path, 'install']
+        cmd = [engine_path, 'install', '--show-file-progress']
+        # Check GPU setting
+        from jackify.backend.handlers.config_handler import ConfigHandler
+        config_handler = ConfigHandler()
+        gpu_enabled = config_handler.get('enable_gpu_texture_conversion', True)
+        logger.info(f"GPU texture conversion setting: {gpu_enabled}")
+        if not gpu_enabled:
+            cmd.append('--no-gpu')
+            logger.info("Added --no-gpu flag to jackify-engine command")
         modlist_value = context.get('modlist_value')
         if modlist_value and modlist_value.endswith('.wabbajack') and os.path.isfile(modlist_value):
             cmd += ['-w', modlist_value]
@@ -326,8 +336,10 @@ class ModlistService:
         else:
             output_callback(f"File descriptor limit warning: {message}")
-        # Subprocess call (copied from working code)
-        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=False, env=None, cwd=engine_dir)
+        # Subprocess call with cleaned environment to prevent AppImage variable inheritance
+        from jackify.backend.handlers.subprocess_utils import get_clean_subprocess_env
+        clean_env = get_clean_subprocess_env()
+        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=False, env=clean_env, cwd=engine_dir)
         # Output processing (copied from working code)
         buffer = b''

View File

@@ -481,14 +481,34 @@ class NativeSteamService:
         Returns:
             (success, app_id) - Success status and the AppID
         """
-        # Auto-detect best Proton version if none provided
+        # Use Game Proton from settings for shortcut creation (not Install Proton)
         if proton_version is None:
             try:
-                from jackify.backend.core.modlist_operations import _get_user_proton_version
-                proton_version = _get_user_proton_version()
-                logger.info(f"Auto-detected Proton version: {proton_version}")
+                from jackify.backend.handlers.config_handler import ConfigHandler
+                config_handler = ConfigHandler()
+                game_proton_path = config_handler.get_game_proton_path()
+                if game_proton_path and game_proton_path != 'auto':
+                    # User has selected Game Proton - use it
+                    proton_version = os.path.basename(game_proton_path)
+                    # Convert to Steam format
+                    if not proton_version.startswith('GE-Proton'):
+                        proton_version = proton_version.lower().replace(' - ', '_').replace(' ', '_').replace('-', '_')
+                        if not proton_version.startswith('proton'):
+                            proton_version = f"proton_{proton_version}"
+                    logger.info(f"Using Game Proton from settings: {proton_version}")
+                else:
+                    # Fallback to auto-detect if Game Proton not set
+                    from jackify.backend.handlers.wine_utils import WineUtils
+                    best_proton = WineUtils.select_best_proton()
+                    if best_proton:
+                        proton_version = best_proton['name']
+                        logger.info(f"Auto-detected Game Proton: {proton_version}")
+                    else:
+                        proton_version = "proton_experimental"
+                        logger.warning("Failed to auto-detect Game Proton, falling back to experimental")
             except Exception as e:
-                logger.warning(f"Failed to auto-detect Proton, falling back to experimental: {e}")
+                logger.warning(f"Failed to get Game Proton, falling back to experimental: {e}")
                 proton_version = "proton_experimental"
         logger.info(f"Creating shortcut with Proton: '{app_name}' -> '{proton_version}'")

View File

@@ -0,0 +1,258 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Nexus Authentication Service
Unified service for Nexus authentication using OAuth or API key fallback
"""
import logging
from typing import Optional, Tuple
from .nexus_oauth_service import NexusOAuthService
from ..handlers.oauth_token_handler import OAuthTokenHandler
from .api_key_service import APIKeyService
logger = logging.getLogger(__name__)
class NexusAuthService:
"""
Unified authentication service for Nexus Mods
Handles OAuth 2.0 (preferred) with API key fallback (legacy)
"""
def __init__(self):
"""Initialize authentication service"""
self.oauth_service = NexusOAuthService()
self.token_handler = OAuthTokenHandler()
self.api_key_service = APIKeyService()
logger.debug("NexusAuthService initialized")
def get_auth_token(self) -> Optional[str]:
"""
Get authentication token, preferring OAuth over API key
Returns:
Access token or API key, or None if no authentication available
"""
# Try OAuth first
oauth_token = self._get_oauth_token()
if oauth_token:
logger.debug("Using OAuth token for authentication")
return oauth_token
# Fall back to API key
api_key = self.api_key_service.get_saved_api_key()
if api_key:
logger.debug("Using API key for authentication (OAuth not available)")
return api_key
logger.warning("No authentication available (neither OAuth nor API key)")
return None
def _get_oauth_token(self) -> Optional[str]:
"""
Get OAuth access token, refreshing if needed
Returns:
Valid access token or None
"""
# Check if we have a stored token
if not self.token_handler.has_token():
logger.debug("No OAuth token stored")
return None
# Check if token is expired (15 minute buffer for long installs)
if self.token_handler.is_token_expired(buffer_minutes=15):
logger.info("OAuth token expiring soon, attempting refresh")
# Try to refresh
refresh_token = self.token_handler.get_refresh_token()
if refresh_token:
new_token_data = self.oauth_service.refresh_token(refresh_token)
if new_token_data:
# Save refreshed token
self.token_handler.save_token({'oauth': new_token_data})
logger.info("OAuth token refreshed successfully")
return new_token_data.get('access_token')
else:
logger.warning("Token refresh failed, OAuth token invalid")
# Delete invalid token
self.token_handler.delete_token()
return None
else:
logger.warning("No refresh token available")
return None
# Token is valid, return it
return self.token_handler.get_access_token()
def is_authenticated(self) -> bool:
"""
Check if user is authenticated via OAuth or API key
Returns:
True if authenticated
"""
return self.get_auth_token() is not None
def get_auth_method(self) -> Optional[str]:
"""
Get current authentication method
Returns:
'oauth', 'api_key', or None
"""
# Check OAuth first
oauth_token = self._get_oauth_token()
if oauth_token:
return 'oauth'
# Check API key
api_key = self.api_key_service.get_saved_api_key()
if api_key:
return 'api_key'
return None
def get_auth_status(self) -> Tuple[bool, str, Optional[str]]:
"""
Get detailed authentication status
Returns:
Tuple of (authenticated, method, username)
- authenticated: True if authenticated
- method: 'oauth', 'oauth_expired', 'api_key', or 'none'
- username: Username if available (OAuth only), or None
"""
# Check if OAuth token exists
if self.token_handler.has_token():
# Check if refresh token is likely expired (hasn't been refreshed in 30+ days)
token_info = self.token_handler.get_token_info()
if token_info.get('refresh_token_likely_expired'):
logger.warning("Refresh token likely expired (30+ days old), user should re-authorize")
return False, 'oauth_expired', None
# Try OAuth
oauth_token = self._get_oauth_token()
if oauth_token:
# Try to get username from userinfo
user_info = self.oauth_service.get_user_info(oauth_token)
username = user_info.get('name') if user_info else None
return True, 'oauth', username
elif self.token_handler.has_token():
# Had token but couldn't get valid access token (refresh failed)
logger.warning("OAuth token refresh failed, token may be invalid")
return False, 'oauth_expired', None
# Try API key
api_key = self.api_key_service.get_saved_api_key()
if api_key:
return True, 'api_key', None
return False, 'none', None
def authorize_oauth(self, show_browser_message_callback=None) -> bool:
"""
Perform OAuth authorization flow
Args:
show_browser_message_callback: Optional callback for browser messages
Returns:
True if authorization successful
"""
logger.info("Starting OAuth authorization")
token_data = self.oauth_service.authorize(show_browser_message_callback)
if token_data:
# Save token
success = self.token_handler.save_token({'oauth': token_data})
if success:
logger.info("OAuth authorization completed successfully")
return True
else:
logger.error("Failed to save OAuth token")
return False
else:
logger.error("OAuth authorization failed")
return False
def revoke_oauth(self) -> bool:
"""
Revoke OAuth authorization by deleting stored token
Returns:
True if revoked successfully
"""
logger.info("Revoking OAuth authorization")
return self.token_handler.delete_token()
def save_api_key(self, api_key: str) -> bool:
"""
Save API key (legacy fallback)
Args:
api_key: Nexus API key
Returns:
True if saved successfully
"""
return self.api_key_service.save_api_key(api_key)
def validate_api_key(self, api_key: Optional[str] = None) -> Tuple[bool, Optional[str]]:
"""
Validate API key against Nexus API
Args:
api_key: Optional API key to validate (uses stored if not provided)
Returns:
Tuple of (valid, username_or_error)
"""
return self.api_key_service.validate_api_key(api_key)
def ensure_valid_auth(self) -> Optional[str]:
"""
Ensure we have valid authentication, refreshing if needed
This should be called before any Nexus operation
Returns:
Valid auth token (OAuth access token or API key), or None
"""
auth_token = self.get_auth_token()
if not auth_token:
logger.warning("No authentication available for Nexus operation")
return auth_token
def get_auth_for_engine(self) -> Optional[str]:
"""
Get authentication token for jackify-engine
Same as ensure_valid_auth() - engine uses NEXUS_API_KEY env var for both OAuth and API keys
(This matches upstream Wabbajack behavior)
Returns:
Valid auth token to pass via NEXUS_API_KEY environment variable, or None
"""
return self.ensure_valid_auth()
def clear_all_auth(self) -> bool:
"""
Clear all authentication (both OAuth and API key)
Useful for testing or switching accounts
Returns:
True if any auth was cleared
"""
oauth_cleared = self.token_handler.delete_token()
api_key_cleared = self.api_key_service.clear_api_key()
if oauth_cleared or api_key_cleared:
logger.info("Cleared all Nexus authentication")
return True
else:
logger.debug("No authentication to clear")
return False
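
A brief sketch of how a caller might use NexusAuthService before launching jackify-engine; per the docstring above, the engine reads the token from the NEXUS_API_KEY environment variable, whether it is an OAuth access token or a legacy API key.

```python
import os

auth = NexusAuthService()
authenticated, method, username = auth.get_auth_status()
print(f"Nexus auth method: {method}" + (f" ({username})" if username else ""))

token = auth.get_auth_for_engine()
if token:
    env = os.environ.copy()
    env['NEXUS_API_KEY'] = token  # consumed by jackify-engine for Nexus downloads
else:
    print("No Nexus authentication configured - OAuth authorisation or an API key is required")
```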

View File

@@ -0,0 +1,759 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Nexus OAuth Service
Handles OAuth 2.0 authentication flow with Nexus Mods using PKCE
"""
import os
import base64
import hashlib
import secrets
import webbrowser
import urllib.parse
from http.server import HTTPServer, BaseHTTPRequestHandler
import requests
import json
import threading
import ssl
import tempfile
import logging
import time
import subprocess
from typing import Optional, Tuple, Dict
logger = logging.getLogger(__name__)
class NexusOAuthService:
"""
Handles OAuth 2.0 authentication with Nexus Mods
Uses PKCE flow with system browser and localhost callback
"""
# OAuth Configuration
CLIENT_ID = "jackify"
AUTH_URL = "https://users.nexusmods.com/oauth/authorize"
TOKEN_URL = "https://users.nexusmods.com/oauth/token"
USERINFO_URL = "https://users.nexusmods.com/oauth/userinfo"
SCOPES = "public openid profile"
# Redirect configuration (custom protocol scheme - no SSL cert needed!)
# Requires jackify:// protocol handler to be registered with OS
REDIRECT_URI = "jackify://oauth/callback"
# Callback timeout (5 minutes)
CALLBACK_TIMEOUT = 300
def __init__(self):
"""Initialize OAuth service"""
self._auth_code = None
self._auth_state = None
self._auth_error = None
self._server_done = threading.Event()
# Ensure jackify:// protocol is registered on first use
self._ensure_protocol_registered()
def _generate_pkce_params(self) -> Tuple[str, str, str]:
"""
Generate PKCE code verifier, challenge, and state
Returns:
Tuple of (code_verifier, code_challenge, state)
"""
# Generate code verifier (43-128 characters, base64url encoded)
code_verifier = base64.urlsafe_b64encode(
os.urandom(32)
).decode('utf-8').rstrip('=')
# Generate code challenge (SHA256 hash of verifier, base64url encoded)
code_challenge = base64.urlsafe_b64encode(
hashlib.sha256(code_verifier.encode('utf-8')).digest()
).decode('utf-8').rstrip('=')
# Generate state for CSRF protection
state = secrets.token_urlsafe(32)
return code_verifier, code_challenge, state
def _ensure_protocol_registered(self) -> bool:
"""
Ensure jackify:// protocol is registered with the OS
Returns:
True if registration successful or already registered
"""
import subprocess
import sys
from pathlib import Path
if not sys.platform.startswith('linux'):
logger.debug("Protocol registration only needed on Linux")
return True
try:
# Ensure desktop file exists and has correct Exec path
desktop_file = Path.home() / ".local" / "share" / "applications" / "com.jackify.app.desktop"
# Get environment for AppImage detection
env = os.environ
# Determine executable path (DEV mode vs AppImage)
# Check multiple indicators for AppImage execution
is_appimage = (
getattr(sys, 'frozen', False) or # PyInstaller frozen
'APPIMAGE' in env or # AppImage environment variable
'APPDIR' in env or # AppImage directory variable
(sys.argv[0] and sys.argv[0].endswith('.AppImage')) # Executable name
)
if is_appimage:
# Running from AppImage - use the AppImage path directly
# CRITICAL: Never use -m flag in AppImage mode - it causes __main__.py windows
if 'APPIMAGE' in env:
# APPIMAGE env var gives us the exact path to the AppImage
exec_path = env['APPIMAGE']
logger.info(f"Using APPIMAGE env var: {exec_path}")
elif sys.argv[0] and Path(sys.argv[0]).exists():
# Use sys.argv[0] if it's a valid path
exec_path = str(Path(sys.argv[0]).resolve())
logger.info(f"Using resolved sys.argv[0]: {exec_path}")
else:
# Fallback to sys.argv[0] as-is
exec_path = sys.argv[0]
logger.warning(f"Using sys.argv[0] as fallback: {exec_path}")
else:
# Running from source (DEV mode)
# Need to ensure we run from the correct directory
src_dir = Path(__file__).parent.parent.parent.parent # Go up to src/
exec_path = f"cd {src_dir} && {sys.executable} -m jackify.frontends.gui"
logger.info(f"DEV mode exec path: {exec_path}")
logger.info(f"Source directory: {src_dir}")
# Check if desktop file needs creation or update
needs_update = False
if not desktop_file.exists():
needs_update = True
logger.info("Creating desktop file for protocol handler")
else:
# Check if Exec path matches current mode
current_content = desktop_file.read_text()
if f"Exec={exec_path} %u" not in current_content:
needs_update = True
logger.info(f"Updating desktop file with new Exec path: {exec_path}")
if needs_update:
desktop_file.parent.mkdir(parents=True, exist_ok=True)
# Build desktop file content with proper working directory
if is_appimage:
# AppImage doesn't need working directory
desktop_content = f"""[Desktop Entry]
Type=Application
Name=Jackify
Comment=Wabbajack modlist manager for Linux
Exec={exec_path} %u
Icon=com.jackify.app
Terminal=false
Categories=Game;Utility;
MimeType=x-scheme-handler/jackify;
"""
else:
# DEV mode needs working directory set to src/
# exec_path already contains the correct format: "cd {src_dir} && {sys.executable} -m jackify.frontends.gui"
src_dir = Path(__file__).parent.parent.parent.parent # Go up to src/
desktop_content = f"""[Desktop Entry]
Type=Application
Name=Jackify
Comment=Wabbajack modlist manager for Linux
Exec={exec_path} %u
Icon=com.jackify.app
Terminal=false
Categories=Game;Utility;
MimeType=x-scheme-handler/jackify;
Path={src_dir}
"""
desktop_file.write_text(desktop_content)
logger.info(f"Desktop file written: {desktop_file}")
logger.info(f"Exec path: {exec_path}")
logger.info(f"AppImage mode: {is_appimage}")
# Always ensure full registration (don't trust xdg-settings alone)
# PopOS/Ubuntu need mimeapps.list even if xdg-settings says registered
logger.info("Registering jackify:// protocol handler")
# Update MIME cache (required for Firefox dialog)
apps_dir = Path.home() / ".local" / "share" / "applications"
subprocess.run(
['update-desktop-database', str(apps_dir)],
capture_output=True,
timeout=10
)
# Set as default handler using xdg-mime (Firefox compatibility)
subprocess.run(
['xdg-mime', 'default', 'com.jackify.app.desktop', 'x-scheme-handler/jackify'],
capture_output=True,
timeout=10
)
# Also use xdg-settings as backup (some systems need both)
subprocess.run(
['xdg-settings', 'set', 'default-url-scheme-handler', 'jackify', 'com.jackify.app.desktop'],
capture_output=True,
timeout=10
)
# Manually ensure entry in mimeapps.list (PopOS/Ubuntu require this for GIO)
mimeapps_path = Path.home() / ".config" / "mimeapps.list"
try:
# Read existing content
if mimeapps_path.exists():
content = mimeapps_path.read_text()
else:
mimeapps_path.parent.mkdir(parents=True, exist_ok=True)
content = "[Default Applications]\n"
# Add jackify handler if not present
if 'x-scheme-handler/jackify=' not in content:
if '[Default Applications]' not in content:
content = "[Default Applications]\n" + content
# Insert after [Default Applications] line
lines = content.split('\n')
for i, line in enumerate(lines):
if line.strip() == '[Default Applications]':
lines.insert(i + 1, 'x-scheme-handler/jackify=com.jackify.app.desktop')
break
content = '\n'.join(lines)
mimeapps_path.write_text(content)
logger.info("Added jackify handler to mimeapps.list")
except Exception as e:
logger.warning(f"Failed to update mimeapps.list: {e}")
logger.info("jackify:// protocol registered successfully")
return True
except Exception as e:
logger.warning(f"Failed to register jackify:// protocol: {e}")
return False
def _generate_self_signed_cert(self) -> Tuple[Optional[str], Optional[str]]:
"""
Generate self-signed certificate for HTTPS localhost
Returns:
Tuple of (cert_file_path, key_file_path) or (None, None) on failure
"""
try:
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import serialization
import datetime
import ipaddress
logger.info("Generating self-signed certificate for OAuth callback")
# Generate private key
private_key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
)
# Create certificate
subject = issuer = x509.Name([
x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Jackify"),
x509.NameAttribute(NameOID.COMMON_NAME, self.REDIRECT_HOST),
])
cert = x509.CertificateBuilder().subject_name(
subject
).issuer_name(
issuer
).public_key(
private_key.public_key()
).serial_number(
x509.random_serial_number()
).not_valid_before(
datetime.datetime.now(datetime.UTC)
).not_valid_after(
datetime.datetime.now(datetime.UTC) + datetime.timedelta(days=365)
).add_extension(
x509.SubjectAlternativeName([
x509.IPAddress(ipaddress.IPv4Address(self.REDIRECT_HOST)),
]),
critical=False,
).sign(private_key, hashes.SHA256())
# Save to temp files
temp_dir = tempfile.mkdtemp()
cert_file = os.path.join(temp_dir, "oauth_cert.pem")
key_file = os.path.join(temp_dir, "oauth_key.pem")
with open(cert_file, "wb") as f:
f.write(cert.public_bytes(serialization.Encoding.PEM))
with open(key_file, "wb") as f:
f.write(private_key.private_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PrivateFormat.TraditionalOpenSSL,
encryption_algorithm=serialization.NoEncryption()
))
return cert_file, key_file
except ImportError:
logger.error("cryptography package not installed - required for OAuth")
return None, None
except Exception as e:
logger.error(f"Failed to generate SSL certificate: {e}")
return None, None
def _build_authorization_url(self, code_challenge: str, state: str) -> str:
"""
Build OAuth authorization URL
Args:
code_challenge: PKCE code challenge
state: CSRF protection state
Returns:
Authorization URL
"""
params = {
'response_type': 'code',
'client_id': self.CLIENT_ID,
'redirect_uri': self.REDIRECT_URI,
'scope': self.SCOPES,
'code_challenge': code_challenge,
'code_challenge_method': 'S256',
'state': state
}
return f"{self.AUTH_URL}?{urllib.parse.urlencode(params)}"
def _create_callback_handler(self):
"""Create HTTP request handler class for OAuth callback"""
service = self
class OAuthCallbackHandler(BaseHTTPRequestHandler):
"""HTTP request handler for OAuth callback"""
def log_message(self, format, *args):
"""Log OAuth callback requests"""
logger.debug(f"OAuth callback: {format % args}")
def do_GET(self):
"""Handle GET request from OAuth redirect"""
logger.info(f"OAuth callback received: {self.path}")
# Parse query parameters
parsed = urllib.parse.urlparse(self.path)
params = urllib.parse.parse_qs(parsed.query)
# Ignore favicon and other non-OAuth requests
if parsed.path == '/favicon.ico':
self.send_response(404)
self.end_headers()
return
if 'code' in params:
service._auth_code = params['code'][0]
service._auth_state = params.get('state', [None])[0]
logger.info(f"OAuth authorization code received: {service._auth_code[:10]}...")
# Send success response
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
html = """
<html>
<head><title>Authorization Successful</title></head>
<body style="font-family: Arial, sans-serif; text-align: center; padding: 50px;">
<h1>Authorization Successful!</h1>
<p>You can close this window and return to Jackify.</p>
<script>setTimeout(function() { window.close(); }, 3000);</script>
</body>
</html>
"""
self.wfile.write(html.encode())
elif 'error' in params:
service._auth_error = params['error'][0]
error_desc = params.get('error_description', ['Unknown error'])[0]
# Send error response
self.send_response(200)
self.send_header('Content-type', 'text/html')
self.end_headers()
html = f"""
<html>
<head><title>Authorization Failed</title></head>
<body style="font-family: Arial, sans-serif; text-align: center; padding: 50px;">
<h1>Authorization Failed</h1>
<p>Error: {service._auth_error}</p>
<p>{error_desc}</p>
<p>You can close this window and try again in Jackify.</p>
</body>
</html>
"""
self.wfile.write(html.encode())
else:
# Unexpected callback format
logger.warning(f"OAuth callback with no code or error: {params}")
self.send_response(400)
self.send_header('Content-type', 'text/html')
self.end_headers()
html = """
<html>
<head><title>Invalid Request</title></head>
<body style="font-family: Arial, sans-serif; text-align: center; padding: 50px;">
<h1>Invalid OAuth Callback</h1>
<p>You can close this window.</p>
</body>
</html>
"""
self.wfile.write(html.encode())
# Signal server to shut down
service._server_done.set()
logger.debug("OAuth callback handler signaled server to shut down")
return OAuthCallbackHandler
def _wait_for_callback(self) -> bool:
"""
Wait for OAuth callback via jackify:// protocol handler
Returns:
True if callback received, False on timeout
"""
from pathlib import Path
import time
callback_file = Path.home() / ".config" / "jackify" / "oauth_callback.tmp"
# Delete any old callback file
if callback_file.exists():
callback_file.unlink()
logger.info("Waiting for OAuth callback via jackify:// protocol")
# Poll for callback file with periodic user feedback
start_time = time.time()
last_reminder = 0
while (time.time() - start_time) < self.CALLBACK_TIMEOUT:
if callback_file.exists():
try:
# Read callback data
lines = callback_file.read_text().strip().split('\n')
if len(lines) >= 2:
self._auth_code = lines[0]
self._auth_state = lines[1]
logger.info(f"OAuth callback received: code={self._auth_code[:10]}...")
# Clean up
callback_file.unlink()
return True
except Exception as e:
logger.error(f"Failed to read callback file: {e}")
return False
# Show periodic reminder about protocol handler
elapsed = time.time() - start_time
if elapsed - last_reminder > 30: # Every 30 seconds
logger.info(f"Still waiting for OAuth callback... ({int(elapsed)}s elapsed)")
if elapsed > 60:
logger.warning(
"If you see a blank browser tab or popup blocker, "
"check for browser notifications asking to 'Open Jackify'"
)
last_reminder = elapsed
time.sleep(0.5) # Poll every 500ms
logger.error(f"OAuth callback timeout after {self.CALLBACK_TIMEOUT} seconds")
logger.error(
"Protocol handler may not be working. Check:\n"
" 1. Browser asked 'Open Jackify?' and you clicked Allow\n"
" 2. No popup blocker notifications\n"
" 3. Desktop file exists: ~/.local/share/applications/com.jackify.app.desktop"
)
return False
def _send_desktop_notification(self, title: str, message: str):
"""
Send desktop notification if available
Args:
title: Notification title
message: Notification message
"""
try:
# Try notify-send (Linux)
subprocess.run(
['notify-send', title, message],
check=False,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
timeout=2
)
except (FileNotFoundError, subprocess.TimeoutExpired):
pass
def _exchange_code_for_token(
self,
auth_code: str,
code_verifier: str
) -> Optional[Dict]:
"""
Exchange authorization code for access token
Args:
auth_code: Authorization code from callback
code_verifier: PKCE code verifier
Returns:
Token response dict or None on failure
"""
data = {
'grant_type': 'authorization_code',
'client_id': self.CLIENT_ID,
'redirect_uri': self.REDIRECT_URI,
'code': auth_code,
'code_verifier': code_verifier
}
try:
response = requests.post(self.TOKEN_URL, data=data, timeout=10)
if response.status_code == 200:
token_data = response.json()
logger.info("Successfully exchanged authorization code for token")
return token_data
else:
logger.error(f"Token exchange failed: {response.status_code} - {response.text}")
return None
except requests.RequestException as e:
logger.error(f"Token exchange request failed: {e}")
return None
def refresh_token(self, refresh_token: str) -> Optional[Dict]:
"""
Refresh an access token using refresh token
Args:
refresh_token: Refresh token from previous authentication
Returns:
New token response dict or None on failure
"""
data = {
'grant_type': 'refresh_token',
'client_id': self.CLIENT_ID,
'refresh_token': refresh_token
}
try:
response = requests.post(self.TOKEN_URL, data=data, timeout=10)
if response.status_code == 200:
token_data = response.json()
logger.info("Successfully refreshed access token")
return token_data
else:
logger.error(f"Token refresh failed: {response.status_code} - {response.text}")
return None
except requests.RequestException as e:
logger.error(f"Token refresh request failed: {e}")
return None
def get_user_info(self, access_token: str) -> Optional[Dict]:
"""
Get user information using access token
Args:
access_token: OAuth access token
Returns:
User info dict or None on failure
"""
headers = {
'Authorization': f'Bearer {access_token}'
}
try:
response = requests.get(self.USERINFO_URL, headers=headers, timeout=10)
if response.status_code == 200:
user_info = response.json()
logger.info(f"Retrieved user info for: {user_info.get('name', 'unknown')}")
return user_info
else:
logger.error(f"User info request failed: {response.status_code}")
return None
except requests.RequestException as e:
logger.error(f"User info request failed: {e}")
return None
def authorize(self, show_browser_message_callback=None) -> Optional[Dict]:
"""
Perform full OAuth authorization flow
Args:
show_browser_message_callback: Optional callback to display message about browser opening
Returns:
Token response dict or None on failure
"""
logger.info("Starting Nexus OAuth authorization flow")
# Reset state
self._auth_code = None
self._auth_state = None
self._auth_error = None
self._server_done.clear()
# Generate PKCE parameters
code_verifier, code_challenge, state = self._generate_pkce_params()
logger.debug(f"Generated PKCE parameters (state: {state[:10]}...)")
# Build authorization URL
auth_url = self._build_authorization_url(code_challenge, state)
# Open browser
logger.info("Opening browser for authorisation")
try:
# When running from AppImage, we need to clean the environment to avoid
# library conflicts with system tools (xdg-open, kde-open, etc.)
import os
import subprocess
env = os.environ.copy()
# Remove AppImage-specific environment variables that can cause conflicts
# These variables inject AppImage's bundled libraries into child processes
appimage_vars = [
'LD_LIBRARY_PATH',
'PYTHONPATH',
'PYTHONHOME',
'QT_PLUGIN_PATH',
'QML2_IMPORT_PATH',
]
# Check if we're running from AppImage
if 'APPIMAGE' in env or 'APPDIR' in env:
logger.debug("Running from AppImage - cleaning environment for browser launch")
for var in appimage_vars:
if var in env:
del env[var]
logger.debug(f"Removed {var} from browser environment")
# Use Popen instead of run to avoid waiting for browser to close
# xdg-open may not return until the browser closes, which could be never
try:
process = subprocess.Popen(
['xdg-open', auth_url],
env=env,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
start_new_session=True # Detach from parent process
)
# Give it a moment to fail if it's going to fail
import time
time.sleep(0.5)
# Check if process is still running or has exited successfully
poll_result = process.poll()
if poll_result is None:
# Process still running - browser is opening/open
logger.info("Browser opened successfully via xdg-open (process running)")
browser_opened = True
elif poll_result == 0:
# Process exited successfully
logger.info("Browser opened successfully via xdg-open (exit code 0)")
browser_opened = True
else:
# Process exited with error
logger.warning(f"xdg-open exited with code {poll_result}, trying webbrowser module")
if webbrowser.open(auth_url):
logger.info("Browser opened successfully via webbrowser module")
browser_opened = True
else:
logger.warning("webbrowser.open returned False")
browser_opened = False
except FileNotFoundError:
# xdg-open not found - try webbrowser module
logger.warning("xdg-open not found, trying webbrowser module")
if webbrowser.open(auth_url):
logger.info("Browser opened successfully via webbrowser module")
browser_opened = True
else:
logger.warning("webbrowser.open returned False")
browser_opened = False
except Exception as e:
logger.error(f"Error opening browser: {e}")
browser_opened = False
# Send desktop notification
self._send_desktop_notification(
"Jackify - Nexus Authorisation",
"Please check your browser to authorise Jackify"
)
# Show message via callback if provided (AFTER browser opens)
if show_browser_message_callback:
if browser_opened:
show_browser_message_callback(
"Browser opened for Nexus authorisation.\n\n"
"After clicking 'Authorize', your browser may ask to\n"
"open Jackify or show a popup blocker notification.\n\n"
"Please click 'Open' or 'Allow' to complete authorization."
)
else:
show_browser_message_callback(
f"Could not open browser automatically.\n\n"
f"Please open this URL manually:\n{auth_url}"
)
# Wait for callback via jackify:// protocol
if not self._wait_for_callback():
return None
# Check for errors
if self._auth_error:
logger.error(f"Authorization failed: {self._auth_error}")
return None
if not self._auth_code:
logger.error("No authorization code received")
return None
# Verify state matches
if self._auth_state != state:
logger.error("State mismatch - possible CSRF attack")
return None
logger.info("Authorization code received, exchanging for token")
# Exchange code for token
token_data = self._exchange_code_for_token(self._auth_code, code_verifier)
if token_data:
logger.info("OAuth authorization flow completed successfully")
else:
logger.error("Failed to exchange authorization code for token")
return token_data
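
As a sanity check on the PKCE values generated above, this standalone snippet reproduces the S256 verifier/challenge transform with a fixed (non-random) verifier; a real flow must derive the verifier from os.urandom as in _generate_pkce_params.

```python
import base64
import hashlib

# Fixed bytes for illustration only; production code uses os.urandom(32).
verifier = base64.urlsafe_b64encode(b'\x00' * 32).decode('utf-8').rstrip('=')
challenge = base64.urlsafe_b64encode(
    hashlib.sha256(verifier.encode('utf-8')).digest()
).decode('utf-8').rstrip('=')
print(len(verifier), len(challenge))  # both 43 characters, as expected for S256
```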

View File

@@ -6,8 +6,11 @@ Centralized service for detecting and managing protontricks installation across
""" """
import logging import logging
import os
import shutil import shutil
import subprocess import subprocess
import sys
import importlib.util
from typing import Optional, Tuple from typing import Optional, Tuple
from ..handlers.protontricks_handler import ProtontricksHandler from ..handlers.protontricks_handler import ProtontricksHandler
from ..handlers.config_handler import ConfigHandler from ..handlers.config_handler import ConfigHandler
@@ -44,7 +47,7 @@ class ProtontricksDetectionService:
def detect_protontricks(self, use_cache: bool = True) -> Tuple[bool, str, str]: def detect_protontricks(self, use_cache: bool = True) -> Tuple[bool, str, str]:
""" """
Detect if protontricks is installed and get installation details Detect if system protontricks is installed and get installation details
Args: Args:
use_cache (bool): Whether to use cached detection result use_cache (bool): Whether to use cached detection result
@@ -82,7 +85,7 @@ class ProtontricksDetectionService:
details_message = "Protontricks is installed (unknown type)" details_message = "Protontricks is installed (unknown type)"
else: else:
installation_type = 'none' installation_type = 'none'
details_message = "Protontricks not found - required for Jackify functionality" details_message = "Protontricks not found - install via flatpak or package manager"
# Cache the result # Cache the result
self._last_detection_result = (is_installed, installation_type, details_message) self._last_detection_result = (is_installed, installation_type, details_message)
@@ -93,55 +96,22 @@ class ProtontricksDetectionService:
     def _detect_without_prompts(self, handler: ProtontricksHandler) -> bool:
         """
-        Detect protontricks without user prompts or installation attempts
+        Detect system protontricks (flatpak or native) without user prompts.

         Args:
             handler (ProtontricksHandler): Handler instance to use

         Returns:
-            bool: True if protontricks is found
+            bool: True if system protontricks is found
         """
-        import shutil
-
-        # Check if protontricks exists as a command
-        protontricks_path_which = shutil.which("protontricks")
-        if protontricks_path_which:
-            # Check if it's a flatpak wrapper
-            try:
-                with open(protontricks_path_which, 'r') as f:
-                    content = f.read()
-                if "flatpak run" in content:
-                    logger.debug(f"Detected Protontricks is a Flatpak wrapper at {protontricks_path_which}")
-                    handler.which_protontricks = 'flatpak'
-                    # Continue to check flatpak list just to be sure
-                else:
-                    logger.info(f"Native Protontricks found at {protontricks_path_which}")
-                    handler.which_protontricks = 'native'
-                    handler.protontricks_path = protontricks_path_which
-                    return True
-            except Exception as e:
-                logger.error(f"Error reading protontricks executable: {e}")
-
-        # Check if flatpak protontricks is installed
-        try:
-            env = handler._get_clean_subprocess_env()
-            result = subprocess.run(
-                ["flatpak", "list"],
-                stdout=subprocess.PIPE,
-                stderr=subprocess.DEVNULL,  # Suppress stderr to avoid error messages
-                text=True,
-                env=env
-            )
-            if result.returncode == 0 and "com.github.Matoking.protontricks" in result.stdout:
-                logger.info("Flatpak Protontricks is installed")
-                handler.which_protontricks = 'flatpak'
-                return True
-        except FileNotFoundError:
-            logger.warning("'flatpak' command not found. Cannot check for Flatpak Protontricks.")
-        except Exception as e:
-            logger.error(f"Unexpected error checking flatpak: {e}")
-
-        return False
+        # Use the handler's silent detection method
+        return handler.detect_protontricks()
+
+    def is_bundled_mode(self) -> bool:
+        """
+        DEPRECATED: Bundled protontricks no longer supported.
+        Always returns False for backwards compatibility.
+        """
+        return False

     def install_flatpak_protontricks(self) -> Tuple[bool, str]:
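
A minimal usage sketch of the simplified detection path (module path and no-argument constructor are assumptions for illustration):

```python
from jackify.backend.services.protontricks_detection_service import ProtontricksDetectionService  # path assumed

service = ProtontricksDetectionService()
is_installed, installation_type, details = service.detect_protontricks(use_cache=False)
print(f"Protontricks installed: {is_installed} ({installation_type}) - {details}")
```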

View File

@@ -10,42 +10,82 @@ from typing import Callable, Optional
 logger = logging.getLogger(__name__)

+STRATEGY_JACKIFY = "jackify"
+STRATEGY_NAK_SIMPLE = "nak_simple"
+
+
+def _get_restart_strategy() -> str:
+    """Read restart strategy from config with safe fallback."""
+    try:
+        from jackify.backend.handlers.config_handler import ConfigHandler
+        strategy = ConfigHandler().get("steam_restart_strategy", STRATEGY_JACKIFY)
+        if strategy not in (STRATEGY_JACKIFY, STRATEGY_NAK_SIMPLE):
+            return STRATEGY_JACKIFY
+        return strategy
+    except Exception as exc:  # pragma: no cover - defensive logging only
+        logger.debug(f"Steam restart: Unable to read strategy from config: {exc}")
+        return STRATEGY_JACKIFY
+
+
+def _strategy_label(strategy: str) -> str:
+    if strategy == STRATEGY_NAK_SIMPLE:
+        return "NaK simple restart"
+    return "Jackify hardened restart"
+
+
 def _get_clean_subprocess_env():
     """
-    Create a clean environment for subprocess calls by removing PyInstaller-specific
-    environment variables that can interfere with Steam execution.
+    Create a clean environment for subprocess calls by stripping bundle-specific
+    environment variables (e.g., frozen AppImage remnants) that can interfere with Steam.
+
+    CRITICAL: Preserves all display/session variables that Steam needs for GUI:
+    - DISPLAY, WAYLAND_DISPLAY, XDG_SESSION_TYPE, DBUS_SESSION_BUS_ADDRESS,
+      XDG_RUNTIME_DIR, XAUTHORITY, etc.

     Returns:
-        dict: Cleaned environment dictionary
+        dict: Cleaned environment dictionary with GUI variables preserved
     """
     env = os.environ.copy()
-    pyinstaller_vars_removed = []
+    bundle_vars_removed = []

-    # Remove PyInstaller-specific environment variables
+    # CRITICAL: Preserve display/session variables that Steam GUI needs
+    # These MUST be kept for Steam to open its GUI window
+    gui_vars_to_preserve = [
+        'DISPLAY', 'WAYLAND_DISPLAY', 'XDG_SESSION_TYPE', 'DBUS_SESSION_BUS_ADDRESS',
+        'XDG_RUNTIME_DIR', 'XAUTHORITY', 'XDG_CURRENT_DESKTOP', 'XDG_SESSION_DESKTOP',
+        'QT_QPA_PLATFORM', 'GDK_BACKEND', 'XDG_DATA_DIRS', 'XDG_CONFIG_DIRS'
+    ]
+    preserved_gui_vars = {}
+    for var in gui_vars_to_preserve:
+        if var in env:
+            preserved_gui_vars[var] = env[var]
+            logger.debug(f"Steam restart: Preserving GUI variable {var}={env[var][:50] if len(str(env[var])) > 50 else env[var]}")
+
+    # Remove bundle-specific environment variables
     if env.pop('_MEIPASS', None):
-        pyinstaller_vars_removed.append('_MEIPASS')
+        bundle_vars_removed.append('_MEIPASS')
     if env.pop('_MEIPASS2', None):
-        pyinstaller_vars_removed.append('_MEIPASS2')
+        bundle_vars_removed.append('_MEIPASS2')

-    # Clean library path variables that PyInstaller modifies (Linux/Unix)
+    # Clean library path variables that frozen bundles modify (Linux/Unix)
     if 'LD_LIBRARY_PATH_ORIG' in env:
-        # Restore original LD_LIBRARY_PATH if it was backed up by PyInstaller
+        # Restore original LD_LIBRARY_PATH if it was backed up by the bundler
         env['LD_LIBRARY_PATH'] = env['LD_LIBRARY_PATH_ORIG']
-        pyinstaller_vars_removed.append('LD_LIBRARY_PATH (restored from _ORIG)')
+        bundle_vars_removed.append('LD_LIBRARY_PATH (restored from _ORIG)')
     else:
-        # Remove PyInstaller-modified LD_LIBRARY_PATH
+        # Remove modified LD_LIBRARY_PATH entries
         if env.pop('LD_LIBRARY_PATH', None):
-            pyinstaller_vars_removed.append('LD_LIBRARY_PATH (removed)')
+            bundle_vars_removed.append('LD_LIBRARY_PATH (removed)')

-    # Clean PATH of PyInstaller-specific entries
+    # Clean PATH of bundle-specific entries
     if 'PATH' in env and hasattr(sys, '_MEIPASS'):
         path_entries = env['PATH'].split(os.pathsep)
         original_count = len(path_entries)
-        # Remove any PATH entries that point to PyInstaller temp directory
+        # Remove any PATH entries that point to the bundle's temp directory
         cleaned_path = [p for p in path_entries if not p.startswith(sys._MEIPASS)]
         env['PATH'] = os.pathsep.join(cleaned_path)
         if len(cleaned_path) < original_count:
-            pyinstaller_vars_removed.append(f'PATH (removed {original_count - len(cleaned_path)} PyInstaller entries)')
+            bundle_vars_removed.append(f'PATH (removed {original_count - len(cleaned_path)} bundle entries)')

     # Clean macOS library path (if present)
     if 'DYLD_LIBRARY_PATH' in env and hasattr(sys, '_MEIPASS'):
@@ -53,16 +93,26 @@ def _get_clean_subprocess_env():
         cleaned_dyld = [p for p in dyld_entries if not p.startswith(sys._MEIPASS)]
         if cleaned_dyld:
             env['DYLD_LIBRARY_PATH'] = os.pathsep.join(cleaned_dyld)
-            pyinstaller_vars_removed.append('DYLD_LIBRARY_PATH (cleaned)')
+            bundle_vars_removed.append('DYLD_LIBRARY_PATH (cleaned)')
         else:
             env.pop('DYLD_LIBRARY_PATH', None)
-            pyinstaller_vars_removed.append('DYLD_LIBRARY_PATH (removed)')
+            bundle_vars_removed.append('DYLD_LIBRARY_PATH (removed)')
+
+    # Ensure GUI variables are still present (they should be, but double-check)
+    for var, value in preserved_gui_vars.items():
+        if var not in env:
+            env[var] = value
+            logger.warning(f"Steam restart: Restored GUI variable {var} that was accidentally removed")

     # Log what was cleaned for debugging
-    if pyinstaller_vars_removed:
-        logger.debug(f"Steam restart: Cleaned PyInstaller environment variables: {', '.join(pyinstaller_vars_removed)}")
+    if bundle_vars_removed:
+        logger.debug(f"Steam restart: Cleaned bundled environment variables: {', '.join(bundle_vars_removed)}")
     else:
-        logger.debug("Steam restart: No PyInstaller environment variables detected (likely DEV mode)")
+        logger.debug("Steam restart: No bundled environment variables detected (likely DEV mode)")
+
+    # Log preserved GUI variables for debugging
+    if preserved_gui_vars:
+        logger.debug(f"Steam restart: Preserved {len(preserved_gui_vars)} GUI environment variables")

     return env
@@ -138,22 +188,99 @@ def wait_for_steam_exit(timeout: int = 60, check_interval: float = 0.5) -> bool:
         time.sleep(check_interval)
     return False

-def start_steam() -> bool:
-    """Attempt to start Steam using the exact methods from existing working logic."""
-    env = _get_clean_subprocess_env()
+def _start_steam_nak_style(is_steamdeck_flag=False, is_flatpak_flag=False, env_override=None) -> bool:
+    """
+    Start Steam using a simplified NaK-style restart (single command, no env cleanup).
+
+    CRITICAL: Do NOT use start_new_session - Steam needs to inherit the session
+    to connect to display/tray. Ensure all GUI environment variables are preserved.
+    """
+    env = env_override if env_override is not None else os.environ.copy()
+
+    # Log critical GUI variables for debugging
+    gui_vars = ['DISPLAY', 'WAYLAND_DISPLAY', 'XDG_SESSION_TYPE', 'DBUS_SESSION_BUS_ADDRESS', 'XDG_RUNTIME_DIR']
+    for var in gui_vars:
+        if var in env:
+            logger.debug(f"NaK-style restart: {var}={env[var][:50] if len(str(env[var])) > 50 else env[var]}")
+        else:
+            logger.warning(f"NaK-style restart: {var} is NOT SET - Steam GUI may fail!")
+
+    try:
+        if is_steamdeck_flag:
+            logger.info("NaK-style restart: Steam Deck detected, restarting via systemctl.")
+            subprocess.Popen(["systemctl", "--user", "restart", "app-steam@autostart.service"], env=env)
+        elif is_flatpak_flag:
+            logger.info("NaK-style restart: Flatpak Steam detected, running flatpak command.")
+            subprocess.Popen(["flatpak", "run", "com.valvesoftware.Steam"],
+                             env=env, stderr=subprocess.DEVNULL)
+        else:
+            logger.info("NaK-style restart: launching Steam directly (inheriting session for GUI).")
+            # NaK uses simple "steam" command without -foreground flag
+            # Do NOT use start_new_session - Steam needs session access for GUI
+            # Use shell=True to ensure proper environment inheritance
+            # This helps with GUI display access on some systems
+            subprocess.Popen("steam", shell=True, env=env)
+
+        time.sleep(5)
+        check_result = subprocess.run(['pgrep', '-f', 'steam'], capture_output=True, timeout=10, env=env)
+        if check_result.returncode == 0:
+            logger.info("NaK-style restart detected running Steam process.")
+            return True
+        logger.warning("NaK-style restart did not detect Steam process after launch.")
+        return False
+    except FileNotFoundError as exc:
+        logger.error(f"NaK-style restart command not found: {exc}")
+        return False
+    except Exception as exc:
+        logger.error(f"NaK-style restart encountered an error: {exc}")
+        return False
+
+
+def start_steam(is_steamdeck_flag=None, is_flatpak_flag=None, env_override=None, strategy: str = STRATEGY_JACKIFY) -> bool:
+    """
+    Attempt to start Steam using the exact methods from existing working logic.
+
+    Args:
+        is_steamdeck_flag: Optional pre-detected Steam Deck status
+        is_flatpak_flag: Optional pre-detected Flatpak Steam status
+        env_override: Optional environment dictionary for subprocess calls
+        strategy: Restart strategy identifier
+    """
+    if strategy == STRATEGY_NAK_SIMPLE:
+        return _start_steam_nak_style(
+            is_steamdeck_flag=is_steamdeck_flag,
+            is_flatpak_flag=is_flatpak_flag,
+            env_override=env_override or os.environ.copy(),
+        )
+
+    env = env_override if env_override is not None else _get_clean_subprocess_env()
+
+    # Use provided flags or detect
+    _is_steam_deck = is_steamdeck_flag if is_steamdeck_flag is not None else is_steam_deck()
+    _is_flatpak = is_flatpak_flag if is_flatpak_flag is not None else is_flatpak_steam()
+
+    logger.info(
+        "Starting Steam (strategy=%s, steam_deck=%s, flatpak=%s)",
+        strategy,
+        _is_steam_deck,
+        _is_flatpak,
+    )

     try:
         # Try systemd user service (Steam Deck) - HIGHEST PRIORITY
-        if is_steam_deck():
+        if _is_steam_deck:
+            logger.debug("Using systemctl restart for Steam Deck.")
             subprocess.Popen(["systemctl", "--user", "restart", "app-steam@autostart.service"], env=env)
             return True

         # Check if Flatpak Steam (only if not Steam Deck)
-        if is_flatpak_steam():
+        if _is_flatpak:
             logger.info("Flatpak Steam detected - using flatpak run command")
             try:
-                # Redirect flatpak's stderr to suppress "app not installed" errors on systems without flatpak Steam
-                # Steam's own stdout/stderr will still go through (flatpak forwards them)
-                subprocess.Popen(["flatpak", "run", "com.valvesoftware.Steam", "-silent"],
+                # Use -foreground to ensure GUI opens (not -silent)
+                # CRITICAL: Do NOT use start_new_session - Steam needs to inherit the session
+                logger.debug("Executing: flatpak run com.valvesoftware.Steam -foreground (inheriting session for GUI)")
+                subprocess.Popen(["flatpak", "run", "com.valvesoftware.Steam", "-foreground"],
                                  env=env, stderr=subprocess.DEVNULL)
                 time.sleep(5)
                 check_result = subprocess.run(['pgrep', '-f', 'steam'], capture_output=True, timeout=10, env=env)
@@ -161,18 +288,15 @@ def start_steam() -> bool:
logger.info("Flatpak Steam process detected after start.") logger.info("Flatpak Steam process detected after start.")
return True return True
else: else:
logger.warning("Flatpak Steam process not detected after start attempt.") logger.warning("Flatpak Steam start failed, falling back to normal Steam start methods")
return False
except Exception as e: except Exception as e:
logger.error(f"Error starting Flatpak Steam: {e}") logger.warning(f"Flatpak Steam start failed ({e}), falling back to normal Steam start methods")
return False
# Use startup methods with only -silent flag (no -minimized or -no-browser) # Use startup methods with -foreground flag to ensure GUI opens
# Don't redirect stdout/stderr or use start_new_session to allow Steam to connect to display/tray
start_methods = [ start_methods = [
{"name": "Popen", "cmd": ["steam", "-silent"], "kwargs": {"env": env}}, {"name": "Popen", "cmd": ["steam", "-foreground"], "kwargs": {"stdout": subprocess.DEVNULL, "stderr": subprocess.DEVNULL, "stdin": subprocess.DEVNULL, "start_new_session": True, "env": env}},
{"name": "setsid", "cmd": ["setsid", "steam", "-silent"], "kwargs": {"env": env}}, {"name": "setsid", "cmd": ["setsid", "steam", "-foreground"], "kwargs": {"stdout": subprocess.DEVNULL, "stderr": subprocess.DEVNULL, "stdin": subprocess.DEVNULL, "env": env}},
{"name": "nohup", "cmd": ["nohup", "steam", "-silent"], "kwargs": {"preexec_fn": os.setpgrp, "env": env}} {"name": "nohup", "cmd": ["nohup", "steam", "-foreground"], "kwargs": {"stdout": subprocess.DEVNULL, "stderr": subprocess.DEVNULL, "stdin": subprocess.DEVNULL, "start_new_session": True, "preexec_fn": os.setpgrp, "env": env}}
] ]
for method in start_methods: for method in start_methods:
@@ -201,13 +325,24 @@ def start_steam() -> bool:
logger.error(f"Error starting Steam: {e}") logger.error(f"Error starting Steam: {e}")
return False return False
def robust_steam_restart(progress_callback: Optional[Callable[[str], None]] = None, timeout: int = 60) -> bool: def robust_steam_restart(progress_callback: Optional[Callable[[str], None]] = None, timeout: int = 60, system_info=None) -> bool:
""" """
Robustly restart Steam across all distros. Returns True on success, False on failure. Robustly restart Steam across all distros. Returns True on success, False on failure.
Optionally accepts a progress_callback(message: str) for UI feedback. Optionally accepts a progress_callback(message: str) for UI feedback.
Uses aggressive pkill approach for maximum reliability. Uses aggressive pkill approach for maximum reliability.
Args:
progress_callback: Optional callback for progress updates
timeout: Timeout in seconds for restart operation
system_info: Optional SystemInfo object with pre-detected Steam installation types
""" """
env = _get_clean_subprocess_env() shutdown_env = _get_clean_subprocess_env()
strategy = _get_restart_strategy()
start_env = shutdown_env if strategy == STRATEGY_JACKIFY else os.environ.copy()
# Use cached detection from system_info if available, otherwise detect
_is_steam_deck = system_info.is_steamdeck if system_info else is_steam_deck()
_is_flatpak = system_info.is_flatpak_steam if system_info else is_flatpak_steam()
def report(msg): def report(msg):
logger.info(msg) logger.info(msg)
@@ -215,22 +350,23 @@ def robust_steam_restart(progress_callback: Optional[Callable[[str], None]] = No
             progress_callback(msg)

     report("Shutting down Steam...")
+    report(f"Steam restart strategy: {_strategy_label(strategy)}")

     # Steam Deck: Use systemctl for shutdown (special handling) - HIGHEST PRIORITY
-    if is_steam_deck():
+    if _is_steam_deck:
         try:
             report("Steam Deck detected - using systemctl shutdown...")
             subprocess.run(['systemctl', '--user', 'stop', 'app-steam@autostart.service'],
-                           timeout=15, check=False, capture_output=True, env=env)
+                           timeout=15, check=False, capture_output=True, env=shutdown_env)
             time.sleep(2)
         except Exception as e:
             logger.debug(f"systemctl stop failed on Steam Deck: {e}")

     # Flatpak Steam: Use flatpak kill command (only if not Steam Deck)
-    elif is_flatpak_steam():
+    elif _is_flatpak:
         try:
             report("Flatpak Steam detected - stopping via flatpak...")
             subprocess.run(['flatpak', 'kill', 'com.valvesoftware.Steam'],
-                           timeout=15, check=False, capture_output=True, stderr=subprocess.DEVNULL, env=env)
+                           timeout=15, check=False, capture_output=True, stderr=subprocess.DEVNULL, env=shutdown_env)
             time.sleep(2)
         except Exception as e:
             logger.debug(f"flatpak kill failed: {e}")
@@ -238,21 +374,21 @@ def robust_steam_restart(progress_callback: Optional[Callable[[str], None]] = No
     # All systems: Use pkill approach (proven 15/16 test success rate)
     try:
         # Skip unreliable steam -shutdown, go straight to pkill
-        pkill_result = subprocess.run(['pkill', 'steam'], timeout=15, check=False, capture_output=True, env=env)
+        pkill_result = subprocess.run(['pkill', 'steam'], timeout=15, check=False, capture_output=True, env=shutdown_env)
         logger.debug(f"pkill steam result: {pkill_result.returncode}")
         time.sleep(2)

         # Check if Steam is still running
-        check_result = subprocess.run(['pgrep', '-f', 'steamwebhelper'], capture_output=True, timeout=10, env=env)
+        check_result = subprocess.run(['pgrep', '-f', 'steamwebhelper'], capture_output=True, timeout=10, env=shutdown_env)
         if check_result.returncode == 0:
             # Force kill if still running
             report("Steam still running - force terminating...")
-            force_result = subprocess.run(['pkill', '-9', 'steam'], timeout=15, check=False, capture_output=True, env=env)
+            force_result = subprocess.run(['pkill', '-9', 'steam'], timeout=15, check=False, capture_output=True, env=shutdown_env)
             logger.debug(f"pkill -9 steam result: {force_result.returncode}")
             time.sleep(2)

             # Final check
-            final_check = subprocess.run(['pgrep', '-f', 'steamwebhelper'], capture_output=True, timeout=10, env=env)
+            final_check = subprocess.run(['pgrep', '-f', 'steamwebhelper'], capture_output=True, timeout=10, env=shutdown_env)
             if final_check.returncode != 0:
                 logger.info("Steam processes successfully force terminated.")
             else:
@@ -271,17 +407,22 @@ def robust_steam_restart(progress_callback: Optional[Callable[[str], None]] = No
report("Starting Steam...") report("Starting Steam...")
# Steam Deck: Use systemctl restart (keep existing working approach) # Steam Deck: Use systemctl restart (keep existing working approach)
if is_steam_deck(): if _is_steam_deck:
try: try:
subprocess.Popen(["systemctl", "--user", "restart", "app-steam@autostart.service"], env=env) subprocess.Popen(["systemctl", "--user", "restart", "app-steam@autostart.service"], env=start_env)
logger.info("Steam Deck: Initiated systemctl restart") logger.info("Steam Deck: Initiated systemctl restart")
except Exception as e: except Exception as e:
logger.error(f"Steam Deck systemctl restart failed: {e}") logger.error(f"Steam Deck systemctl restart failed: {e}")
report("Failed to restart Steam on Steam Deck.") report("Failed to restart Steam on Steam Deck.")
return False return False
else: else:
# All other distros: Use proven steam -silent method # All other distros: Use start_steam() which now uses -foreground to ensure GUI opens
if not start_steam(): if not start_steam(
is_steamdeck_flag=_is_steam_deck,
is_flatpak_flag=_is_flatpak,
env_override=start_env,
strategy=strategy,
):
report("Failed to start Steam.") report("Failed to start Steam.")
return False return False
@@ -294,7 +435,7 @@ def robust_steam_restart(progress_callback: Optional[Callable[[str], None]] = No
     while elapsed_wait < max_startup_wait:
         try:
-            result = subprocess.run(['pgrep', '-f', 'steam'], capture_output=True, timeout=10, env=env)
+            result = subprocess.run(['pgrep', '-f', 'steam'], capture_output=True, timeout=10, env=start_env)
             if result.returncode == 0:
                 if not initial_wait_done:
                     logger.info("Steam process detected. Waiting additional time for full initialization...")
@@ -302,7 +443,7 @@ def robust_steam_restart(progress_callback: Optional[Callable[[str], None]] = No
                 time.sleep(5)
                 elapsed_wait += 5
                 if initial_wait_done and elapsed_wait >= 15:
-                    final_check = subprocess.run(['pgrep', '-f', 'steam'], capture_output=True, timeout=10, env=env)
+                    final_check = subprocess.run(['pgrep', '-f', 'steam'], capture_output=True, timeout=10, env=start_env)
                     if final_check.returncode == 0:
                         report("Steam started successfully.")
                         logger.info("Steam confirmed running after wait.")

View File

@@ -0,0 +1,3 @@
"""Helper utilities for backend services."""

View File

@@ -0,0 +1,46 @@
"""
Utilities for detecting Nexus Premium requirement messages in engine output.
"""
from __future__ import annotations
_KEYWORD_PHRASES = (
"buy nexus premium",
"requires nexus premium",
"requires a nexus premium",
"nexus premium is required",
"nexus premium required",
"nexus mods premium is required",
"manual download", # Evaluated with additional context
)
def is_non_premium_indicator(line: str) -> bool:
"""
Return True if the engine output line indicates a Nexus non-premium scenario.
Args:
line: Raw line emitted from the jackify-engine process.
"""
if not line:
return False
normalized = line.strip().lower()
if not normalized:
return False
# Direct phrase detection
for phrase in _KEYWORD_PHRASES[:6]:
if phrase in normalized:
return True
if "nexus" in normalized and "premium" in normalized:
return True
# Manual download + Nexus URL implies premium requirement in current workflows.
if "manual download" in normalized and ("nexusmods.com" in normalized or "nexus mods" in normalized):
return True
return False
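
A short usage sketch for the new detector (import path assumed), scanning engine output for non-premium indicators:

```python
from jackify.backend.services.helpers.premium_detection import is_non_premium_indicator  # path assumed

engine_output = [
    "[INFO] Downloading archive from nexusmods.com",
    "Manual download required: https://www.nexusmods.com/skyrimspecialedition/mods/12345",
]
if any(is_non_premium_indicator(line) for line in engine_output):
    print("This modlist appears to require Nexus Premium for automated downloads.")
```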

BIN — jackify/engine binaries updated (file mode changed: Normal file → Executable file; binary contents not shown):

jackify/engine/Microsoft.CSharp.dll
jackify/engine/System.Collections.Concurrent.dll
jackify/engine/System.Collections.Immutable.dll
jackify/engine/System.Collections.NonGeneric.dll
jackify/engine/System.Collections.Specialized.dll
jackify/engine/System.Collections.dll
jackify/engine/System.ComponentModel.EventBasedAsync.dll
jackify/engine/System.ComponentModel.Primitives.dll
jackify/engine/System.ComponentModel.TypeConverter.dll
jackify/engine/System.ComponentModel.dll
jackify/engine/System.Console.dll
jackify/engine/System.Data.Common.dll
jackify/engine/System.Diagnostics.FileVersionInfo.dll
jackify/engine/System.Diagnostics.Process.dll
jackify/engine/System.Diagnostics.StackTrace.dll
jackify/engine/System.Diagnostics.TraceSource.dll
jackify/engine/System.Drawing.Primitives.dll
jackify/engine/System.Drawing.dll
jackify/engine/System.Formats.Asn1.dll
jackify/engine/System.IO.Compression.Brotli.dll
jackify/engine/System.IO.Compression.ZipFile.dll
jackify/engine/System.IO.Compression.dll
jackify/engine/System.IO.FileSystem.DriveInfo.dll
jackify/engine/System.IO.FileSystem.Watcher.dll
jackify/engine/System.IO.MemoryMappedFiles.dll
jackify/engine/System.IO.Pipes.dll
jackify/engine/System.Linq.Expressions.dll
jackify/engine/System.Linq.Parallel.dll
jackify/engine/System.Linq.dll
jackify/engine/System.Memory.dll
jackify/engine/System.Net.Http.Json.dll
jackify/engine/System.Net.Http.dll
jackify/engine/System.Net.Mail.dll
jackify/engine/System.Net.NameResolution.dll
jackify/engine/System.Net.NetworkInformation.dll
jackify/engine/System.Net.Primitives.dll
jackify/engine/System.Net.Quic.dll

Several additional engine binaries also changed but are not named in this view.

Some files were not shown because too many files have changed in this diff.