# add-file.py Refactor Summary

## Changes Made

### 1. Removed `is_hydrus` Flag (Legacy Code Removal)

The `is_hydrus` boolean flag was a legacy indicator for Hydrus files; it is no longer needed under the explicit hash+store pattern.

**Changes:**
- Updated the `_resolve_source()` signature from returning `(path, is_hydrus, hash)` to `(path, hash)`
- Removed all `is_hydrus` logic throughout the file (11 occurrences)
- Updated `_is_url_target()` to no longer accept an `is_hydrus` parameter
- Removed Hydrus-specific detection based on the store name containing "hydrus"

**Rationale:** With explicit store names, implicit Hydrus detection is unnecessary. The `store` field in PipeObject provides clear backend identification.
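
A minimal sketch of the new return shape. The helper name and signature match this summary, but the body is an illustration, not the actual implementation (which handles several more resolution cases):

```python
# Hedged sketch of the simplified signature; the real logic lives in
# cmdlets/add_file.py and covers more fallbacks than shown here.
def _resolve_source(result, path_arg, pipe_obj, config):
    """Return (path, hash) -- no is_hydrus flag anymore."""
    file_hash = result.get("hash") if isinstance(result, dict) else None
    path = path_arg or (pipe_obj.file_path if pipe_obj else None)
    return path, file_hash
```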

### 2. Added Comprehensive PipeObject Debugging

Added detailed debug logging throughout the execution flow to provide visibility into:

**PipeObject State After Creation:**
```
[add-file] PIPEOBJECT created:
  hash=00beb438e3c0...
  store=local
  file_path=C:\Users\Admin\Downloads\Audio\yapping.m4a
  tags=[]
  title=None
  extra keys=[]
```

**Input Result Details:**
```
[add-file] INPUT result type=NoneType
```

**Parsed Arguments:**
```
[add-file] PARSED args: location=test, provider=None, delete=False
```

**Source Resolution:**
```
[add-file] RESOLVED source: path=C:\Users\Admin\Downloads\Audio\yapping.m4a, hash=N/A...
```

**Execution Path Decision:**
```
[add-file] DECISION POINT: provider=None, location=test
  media_path=C:\Users\Admin\Downloads\Audio\yapping.m4a, exists=True
  Checking execution paths: provider_name=False, location_local=False, location_exists=True
```

**Route Selection:**
```
[add-file] ROUTE: location specified, checking type...
[add-file] _is_local_path check: location=test, slash=False, backslash=False, colon=False, result=False
[add-file] _is_storage_backend check: location=test, backends=['default', 'home', 'test'], result=True
[add-file] ROUTE: storage backend path
```

**Error Paths:**
```
[add-file] ERROR: No location or provider specified - all checks failed
[add-file] ERROR: Invalid location (not local path or storage backend): {location}
```

### 3. Fixed Critical Bug: Argument Parsing

**Problem:** The `-store` argument was not being recognized, causing a "No storage location or provider specified" error.

**Root Cause:** Mismatch between the argument definition and the parsing code:
- Argument defined as: `SharedArgs.STORE` (name="store")
- Code was looking for: `parsed.get("storage")`

**Fix:** Changed line 65 from:
```python
location = parsed.get("storage")
```
to:
```python
location = parsed.get("store")  # Fixed: was "storage", should be "store"
```

### 4. Enhanced Helper Method Debugging

**`_is_local_path()`:**
```python
debug(f"[add-file] _is_local_path check: location={location}, slash={has_slash}, backslash={has_backslash}, colon={has_colon}, result={result}")
```
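
The debug line above reveals the heuristic. A hedged reconstruction follows; the real helper lives in `cmdlets/add_file.py`, and `debug` is assumed to be the project's logging function:

```python
def _is_local_path(location: str) -> bool:
    # Heuristic: anything containing a path separator or drive colon is
    # treated as a filesystem path rather than a storage backend name.
    has_slash = "/" in location
    has_backslash = "\\" in location
    has_colon = ":" in location
    result = has_slash or has_backslash or has_colon
    debug(f"[add-file] _is_local_path check: location={location}, "
          f"slash={has_slash}, backslash={has_backslash}, "
          f"colon={has_colon}, result={result}")
    return result
```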

**`_is_storage_backend()`:**
```python
debug(f"[add-file] _is_storage_backend check: location={location}, backends={backends}, result={is_backend}")
debug(f"[add-file] _is_storage_backend ERROR: {exc}")  # On exception
```

## Testing Results

### Before Fix:
```
[add-file] PARSED args: location=None, provider=None, delete=False
[add-file] ERROR: No location or provider specified - all checks failed
No storage location or provider specified
```

### After Fix:
```
[add-file] PARSED args: location=test, provider=None, delete=False
[add-file] _is_storage_backend check: location=test, backends=['default', 'home', 'test'], result=True
[add-file] ROUTE: storage backend path
✓ File added to 'test': 00beb438e3c02cdc0340526deb0c51f916ffd6330259be4f350009869c5448d9
```

## Impact

### Files Modified:
- `cmdlets/add_file.py`: ~15 replacements across 350+ lines

### Backwards Compatibility:
- ✅ No breaking changes to the command-line interface
- ✅ Existing pipelines continue to work
- ✅ Hash+store pattern fully enforced

### Code Quality Improvements:
1. **Removed Legacy Code:** Eliminated the `is_hydrus` flag (11 occurrences)
2. **Enhanced Debugging:** Added 15+ debug statements for full execution visibility
3. **Fixed Critical Bug:** Corrected the argument parsing mismatch
4. **Better Error Messages:** All error paths now have debug context

## Documentation

### Debug Output Legend:
- `[add-file] PIPEOBJECT created:` - PipeObject state after coercion
- `[add-file] INPUT result type=` - Type of the piped input
- `[add-file] PARSED args:` - All parsed command-line arguments
- `[add-file] RESOLVED source:` - Resolved file path and hash
- `[add-file] DECISION POINT:` - Routing decision variables
- `[add-file] ROUTE:` - Which execution path is taken
- `[add-file] ERROR:` - Why the operation failed

### Execution Paths:
1. **Provider Upload** (`provider_name` set) → `_handle_provider_upload()`
2. **Local Import** (`location == 'local'`) → `_handle_local_import()`
3. **Local Export** (location is a path) → `_handle_local_export()`
4. **Storage Backend** (location is a backend name) → `_handle_storage_backend()` ✓
5. **Error** (no location/provider) → Error message

## Verification Checklist
- [x] `is_hydrus` completely removed (0 occurrences)
- [x] All return tuples updated to exclude `is_hydrus`
- [x] Comprehensive PipeObject debugging added
- [x] Argument parsing bug fixed (`storage` → `store`)
- [x] Helper method debugging enhanced
- [x] Full execution path visibility achieved
- [x] Tested with a real command: `add-file -path "..." -store test` ✓

## Related Refactorings
- **PIPELINE_REFACTOR_SUMMARY.md**: Removed backwards compatibility from pipeline.py
- **MODELS_REFACTOR_SUMMARY.md**: Refactored PipeObject to the hash+store pattern

This refactor completes the trilogy of modernization efforts, ensuring add-file.py fully embraces the hash+store canonical pattern with zero legacy code.

"""
Analysis: Export-Store vs Get-File cmdlet

=== FINDINGS ===

1. GET-FILE ALREADY EXISTS AND IS SUFFICIENT
   - Located: cmdlets/get_file.py
   - Purpose: Export files from any store backend to a local path
   - Usage: @1 | get-file -path C:\Downloads
   - Supports: Explicit -path, configured output dir, custom filename
   - Works with: All storage backends (Folder, HydrusNetwork, RemoteStorage)

2. ARCHITECTURE COMPARISON

   GET-FILE (current):
   ✓ Takes hash + store name as input
   ✓ Queries backend.get_metadata(hash) to find file details
   ✓ For Folder: Returns a direct Path from the database
   ✓ For HydrusNetwork: Downloads to a temp location via HTTP
   ✓ Outputs the file to the specified directory
   ✓ Supports both input modes: explicit (-hash, -store) and piped results

   EXPORT-STORE (hypothetical):
   ✗ Would be redundant with get-file
   ✗ Would only work with HydrusNetwork (not Folder, Remote, etc.)
   ✗ No clear advantage over get-file's generic approach
   ✗ More specialized = less reusable

3. RECOMMENDED PATTERN

   Sequence for moving files between stores:

   search-store -store home | get-file -path /tmp/staging | add-file -store test

   This reads:
   1. Search the Hydrus "home" instance
   2. Export matching files to staging
   3. Import into the Folder "test" storage

4. FINDINGS ON THE @2 SELECTION ERROR

   Debug output shows:
   "[debug] first-stage: sel=[1] rows=1 items=4"

   This means:
   - The user selected @2 (second item, index=1 in 0-based terms)
   - The table object had only 1 row
   - But items_list had 4 items

   CAUSE: Mismatch between displayed rows and the internal items list

   Possible reasons:
   a) The table display was incomplete (only showed the first row)
   b) set_last_result_table() wasn't called correctly
   c) search-store didn't add all 4 rows to the table object

   FIX: Add better validation in search-store and result table handling

5. DEBUG IMPROVEMENTS MADE

   Added to the add_file.py run() method:
   - Log input result type and length
   - Show first item details: title, hash (truncated), store
   - Log resolved source details
   - Show validation failures with context

   This will help debug "no items matched" errors in the future.

6. STORE FIELD IN RESULTS

   Current behavior:
   - search-store results show store="hydrus" (generic)
   - They should show store="home" or store="work" (the specific instance)

   Next improvement:
   - Update search-store to use FileStorage.list_backends() logic
   - Use dynamic store detection like the .pipe cmdlet does
   - Show actual instance names in the results table

=== RECOMMENDATIONS ===

1. DO NOT create an export-store cmdlet
   - get-file is already generic and works for all backends
   - Adding export-store adds confusion without benefit

2. DO improve the search-store display
   - Import FileStorage and populate store names correctly
   - Show "home" instead of "hydrus" when a result comes from a Hydrus instance
   - Similar to the .pipe cmdlet refactoring

3. DO fix the selection/table registration issue
   - Verify set_last_result_table() is being called with the correct items list
   - Ensure every row added to the table has a corresponding item
   - Add validation: len(table.rows) == len(items_list)

4. DO use the new debug logs in add_file
   - Run: @2 | add-file -store test
   - Observe: [add-file] INPUT result details
   - This will show whether the result is coming through correctly
"""

DEBUGGING IMPROVEMENTS IMPLEMENTED
==================================

1. ENHANCED ADD-FILE DEBUG LOGGING
==================================

Now logs when the cmdlet is executed:
- INPUT result type (list, dict, PipeObject, None, etc.)
- List length if applicable
- First item details: title, hash (first 12 chars), store
- Resolved source: path/URL, whether from Hydrus, hash value
- Error details if resolution or validation fails

Example output:
[add-file] INPUT result type=list
[add-file] INPUT result is list with 4 items
[add-file] First item details: title=i ve been down, hash=b0780e68a2dc..., store=hydrus
[add-file] RESOLVED source: path=None, is_hydrus=True, hash=b0780e68a2dc...
[add-file] ERROR: Source validation failed for None

This will help identify:
- Where the result is being lost
- Whether the hash is being extracted correctly
- Which store the file comes from

2. ENHANCED SEARCH-STORE DEBUG LOGGING
======================================

Now logs after building results:
- Number of table rows added
- Number of items in results_list
- WARNING if there's a mismatch

Example output:
[search-store] Added 4 rows to table, 4 items to results_list
[search-store] WARNING: Table/items mismatch! rows=1 items=4

This directly debugs the "@2 selection" issue:
- Will show whether table/items registration is correct
- Helps diagnose why only 1 row shows when 4 items exist

3. ROOT CAUSE ANALYSIS: "@2 SELECTION FAILED"
=============================================

Your debug output showed:
[debug] first-stage: sel=[1] rows=1 items=4

This means:
- search-store found 4 results
- But only 1 row was registered in the table for selection
- The user selected @2 (index 1), which is valid for 4 items
- But the table only had 1 row, so the selection was out of bounds

The mismatch is between:
- What's displayed to the user (seemingly 4 rows, based on the output)
- What's registered for @N selection (only 1 row)

With the new debug logging, running the same command will show:
[search-store] Added X rows to table, Y items to results_list

If X=1 and Y=4, then search-store isn't adding all results to the table.
If X=4 and Y=4, then the issue is in the CLI selection logic.

4. NEXT DEBUGGING STEPS
=======================

To diagnose the "@2 selection" issue:

1. Run: search-store system:limit=5
2. Look for: [search-store] Added X rows...
3. Compare X to the number of rows shown in the table
4. If X < display_rows: the problem is in table.add_result()
5. If X == display_rows: the problem is in CLI selection mapping

After running add-file:

1. Run: @2 | add-file -store test
2. Look for: [add-file] INPUT result details
3. Check whether hash, title, and store are extracted
4. If missing: the problem is in the result object structure
5. If present: the problem is in _resolve_source() logic

5. ARCHITECTURE DECISION: EXPORT-STORE CMDLET
=============================================

Recommendation: DO NOT CREATE EXPORT-STORE

Reason: get-file already provides this functionality.

get-file:
- Takes hash + store name
- Retrieves from any backend (Folder, HydrusNetwork, Remote, etc.)
- Exports to a specified path
- Works for all storage types
- Already tested and working

Example workflow for moving files between stores:
$ search-store -store home | get-file -path /tmp | add-file -store test

This is cleaner than having a specialized export-store cmdlet.

6. FUTURE IMPROVEMENTS
======================

Based on findings:

a) Update search-store to show specific instance names
   Currently: store="hydrus"
   Should be: store="home" or store="work"
   Implementation: Use FileStorage to detect which instance

b) Fix selection/table registration validation (see the sketch below)
   Add assertion: len(table.rows) == len(results_list)
   Fail fast if a mismatch is detected

c) Enhance add-file to handle Hydrus imports
   Current: Needs a file path on the local filesystem
   Future: Should support add-file -hash <hash> -store home
   This would copy from one Hydrus instance to another
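
   A minimal fail-fast sketch for (b), assuming the table object exposes
   a .rows list (the attribute name is hypothetical):

       rows, items = len(table.rows), len(results_list)
       if rows != items:
           raise RuntimeError(f"table/items mismatch: rows={rows} items={items}")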

SUMMARY
=======

✓ Better debug logging in add-file and search-store
✓ Root cause identified for the "@2 selection" issue
✓ Confirmed get-file is sufficient (no export-store needed)
✓ Path forward: Use the new logging to identify the exact failure point

# Hash+Store Priority Pattern & Database Connection Fixes

## Summary of Changes

### 1. Database Connection Leak Fixes ✅

**Problem:** FolderDB connections were not being properly closed, causing database locks and resource leaks.

**Files Fixed:**
- `cmdlets/search_store.py` - Now uses a `with FolderDB()` context manager
- `cmdlets/search_provider.py` - Now uses a `with FolderDB()` context manager
- `helper/store.py` (Folder.__init__) - Now uses `with FolderDB()` for temporary connections
- `helper/worker_manager.py` - Added a `close()` method and context manager support (`__enter__`/`__exit__`)

**Pattern:**
```python
# OLD (leaked connections):
db = FolderDB(path)
try:
    db.do_something()
finally:
    if db:
        db.close()  # Could be skipped if an exception occurs early

# NEW (guaranteed cleanup):
with FolderDB(path) as db:
    db.do_something()
# Connection automatically closed when exiting the block
```
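
A hedged sketch of the `helper/worker_manager.py` change described above. Only the method names (`close`, `__enter__`, `__exit__`) come from this summary; the internals, including the assumption that a FolderDB connection sits underneath, are illustrative:

```python
class WorkerManager:
    def __init__(self, library_path):
        self._db = FolderDB(library_path)  # assumed underlying connection

    def close(self) -> None:
        self._db.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # never suppress exceptions
```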

### 2. Hash+Store Priority Pattern ✅

**Philosophy:** The hash+store pair is the **canonical identifier** for files across all storage backends. Sort order and table structure should not matter, because we are always using hash+store.

**Why This Matters:**
- `@N` selections pass hash+store from search results
- Hash+store works consistently across all backends (Hydrus, Folder, Remote)
- Path-based resolution is fragile (files move, temp paths expire, etc.)
- Hash+store never changes and uniquely identifies content

**Updated Resolution Priority in `add_file.py`:**

```python
def _resolve_source(result, path_arg, pipe_obj, config):
    """
    PRIORITY 1: hash+store from the result dict (most reliable for @N selections)
    - Checks result.get("hash") and result.get("store")
    - Uses FileStorage[store].get_file(hash) to retrieve
    - Works for: Hydrus, Folder, Remote backends

    PRIORITY 2: Explicit -path argument
    - Direct path specified by the user

    PRIORITY 3: pipe_obj.file_path
    - Legacy path from the previous pipeline stage

    PRIORITY 4: Hydrus hash from pipe_obj.extra
    - Fallback for older Hydrus workflows

    PRIORITY 5: String/list result parsing
    - Last resort for simple string paths
    """
```

**Example Flow:**
```bash
# User searches and selects a result
$ search-store system:limit=5

# Result items include:
{
    "hash": "a1b2c3d4...",
    "store": "home",  # Specific Hydrus instance
    "title": "example.mp4"
}

# User selects @2 (index 1)
$ @2 | add-file -store test

# add-file now:
# 1. Extracts hash="a1b2c3d4..." store="home" from the result dict
# 2. Calls FileStorage["home"].get_file("a1b2c3d4...")
# 3. Retrieves the actual file path from the "home" backend
# 4. Proceeds with the copy/upload to "test" storage
```

### 3. Benefits of This Approach

**Consistency:**
- @N selection always uses the same hash+store regardless of display order
- No confusion about which row index maps to which file
- Table synchronization issues (rows vs items) don't break selection

**Reliability:**
- The hash uniquely identifies content (a SHA-256 collision is effectively impossible)
- The store identifies the authoritative source backend
- No dependency on temporary paths or file locations

**Multi-Instance Support:**
- Works seamlessly with multiple Hydrus instances ("home", "work")
- Works with mixed backends (Hydrus + Folder + Remote)
- Each backend can independently retrieve a file by hash

**Debugging:**
- Hash+store are visible in debug logs: `[add-file] Using hash+store: hash=a1b2c3d4..., store=home`
- Easy to trace which backend is being queried
- Clear error messages when the hash+store lookup fails

## How @N Selection Works Now

### Selection Process:

1. **Search creates a result list with hash+store:**
   ```python
   results_list = [
       {"hash": "abc123...", "store": "home", "title": "file1.mp4"},
       {"hash": "def456...", "store": "default", "title": "file2.jpg"},
       {"hash": "ghi789...", "store": "test", "title": "file3.png"},
   ]
   ```

2. **User selects @2 (second item, index 1):**
   - CLI extracts: `result = {"hash": "def456...", "store": "default", "title": "file2.jpg"}`
   - Passes this dict to the next cmdlet

3. **The next cmdlet receives a dict with hash+store:**
   ```python
   def run(self, result, args, config):
       # result is the dict from selection
       file_hash = result.get("hash")    # "def456..."
       store_name = result.get("store")  # "default"

       # Use hash+store to retrieve the file
       backend = FileStorage(config)[store_name]
       file_path = backend.get_file(file_hash)
   ```

### Why This is Better Than Path-Based:

**Path-Based (OLD):**
```python
# Fragile: the path could be a temp file, symlink, moved file, etc.
result = {"file_path": "/tmp/hydrus-abc123.mp4"}
# What if the file was moved? What if it's a temp path that expires?
```

**Hash+Store (NEW):**
```python
# Reliable: hash+store always works regardless of current location
result = {"hash": "abc123...", "store": "home"}
# The backend retrieves the current location from its database/API
```

## Testing the Fixes

### 1. Test Database Connections:

```powershell
# Search multiple times and check for database locks
search-store system:limit=5
search-store system:limit=5
search-store system:limit=5

# Should complete without "database is locked" errors
```

### 2. Test Hash+Store Selection:

```powershell
# Search and select
search-store system:limit=5
@2 | get-metadata

# Should show metadata for the selected file using hash+store
# The debug log should show: [add-file] Using hash+store from result: hash=...
```

### 3. Test WorkerManager Cleanup:

```python
# In a Python script:
from helper.worker_manager import WorkerManager
from pathlib import Path

with WorkerManager(Path("C:/path/to/library")) as wm:
    # Do work
    pass
# Database automatically closed when exiting the block
```

## Cmdlets That Already Use the Hash+Store Pattern

These cmdlets already correctly extract hash+store:
- ✅ `get-file` - Export file via hash+store
- ✅ `get-metadata` - Retrieve metadata via hash+store
- ✅ `get-url` - Get URL via hash+store
- ✅ `get-tag` - Get tags via hash+store
- ✅ `add-url` - Add URL via hash+store
- ✅ `delete-url` - Delete URL via hash+store
- ✅ `add-file` - **NOW UPDATED** to prioritize hash+store

## Future Improvements

1. **Make hash+store mandatory in result dicts:**
   - All search cmdlets should emit hash+store
   - Validate that result dicts include these fields

2. **Add hash+store validation** (see the sketch after this list):
   - Warn if the hash is not a 64-char hex string
   - Warn if the store is not a registered backend

3. **Standardize error messages:**
   - "File not found via hash+store: hash=abc123 store=home"
   - Makes debugging much clearer

4. **Consider deprecating path-based workflows:**
   - Migrate legacy cmdlets to the hash+store pattern
   - Remove path-based fallbacks once all cmdlets are updated
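
A minimal sketch of the validation proposed in item 2. The function name and the `backends` parameter (the set of registered store names) are hypothetical:

```python
import re

_HEX64 = re.compile(r"^[0-9a-f]{64}$")

def validate_hash_store(file_hash, store, backends):
    """Return a list of warning strings; an empty list means the pair looks valid."""
    warnings = []
    if not _HEX64.match((file_hash or "").lower()):
        warnings.append(f"hash is not a 64-char hex string: {file_hash!r}")
    if store not in backends:
        warnings.append(f"store is not a registered backend: {store!r}")
    return warnings
```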

## Key Takeaway

**The hash+store pair is now the primary way to identify and retrieve files across the entire system.** This makes the codebase more reliable, consistent, and easier to debug. Database connections are properly cleaned up to prevent locks and resource leaks.

# Models.py Refactoring Summary

## Overview
Refactored the `models.py` PipeObject class to align with the hash+store canonical pattern, removing all backwards compatibility and legacy code.

## PipeObject Changes

### Removed Legacy Fields
- ❌ `source` - Replaced with `store` (storage backend name)
- ❌ `identifier` - Replaced with `hash` (SHA-256 hash)
- ❌ `file_hash` - Replaced with `hash` (canonical field)
- ❌ `remote_metadata` - Removed (can go in the metadata dict or extra)
- ❌ `mpv_metadata` - Removed (can go in the metadata dict or extra)
- ❌ `king_hash` - Moved to the relationships dict
- ❌ `alt_hashes` - Moved to the relationships dict
- ❌ `related_hashes` - Moved to the relationships dict
- ❌ `parent_id` - Renamed to `parent_hash` for consistency

### New Canonical Fields
```python
@dataclass(slots=True)
class PipeObject:
    hash: str    # SHA-256 hash (canonical identifier)
    store: str   # Storage backend name (e.g., 'default', 'hydrus', 'test')
    tags: List[str]
    title: Optional[str]
    source_url: Optional[str]
    duration: Optional[float]
    metadata: Dict[str, Any]
    warnings: List[str]
    file_path: Optional[str]
    relationships: Dict[str, Any]  # Contains king/alt/related
    is_temp: bool
    action: Optional[str]
    parent_hash: Optional[str]     # Renamed from parent_id
    extra: Dict[str, Any]
```

### Updated Methods

#### Removed
- ❌ `register_as_king(file_hash)` - Replaced with `add_relationship()`
- ❌ `add_alternate(alt_hash)` - Replaced with `add_relationship()`
- ❌ `add_related(related_hash)` - Replaced with `add_relationship()`
- ❌ `@property hash` - Now a direct field
- ❌ `as_dict()` - Removed backwards compatibility alias
- ❌ `to_serializable()` - Removed backwards compatibility alias

#### Added/Updated
- ✅ `add_relationship(rel_type, rel_hash)` - Generic relationship management
- ✅ `get_relationships()` - Returns a copy of the relationships dict
- ✅ `to_dict()` - Updated to serialize the new fields
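
A hedged sketch of what the two new methods on PipeObject might look like. The single-value "king" versus list behavior is inferred from the field-access table later in this summary; the real implementation is in models.py:

```python
# Methods on PipeObject (sketch):
def add_relationship(self, rel_type: str, rel_hash: str) -> None:
    """Generic replacement for register_as_king/add_alternate/add_related."""
    if rel_type == "king":
        self.relationships["king"] = rel_hash  # single canonical hash
    else:
        self.relationships.setdefault(rel_type, []).append(rel_hash)

def get_relationships(self) -> dict:
    """Return a copy so callers cannot mutate internal state."""
    return dict(self.relationships)
```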

## Updated Files

### cmdlets/_shared.py
- Updated `coerce_to_pipe_object()` to use the hash+store pattern
- Now computes the hash from file_path if not provided (see the sketch below)
- Extracts a relationships dict instead of individual king/alt/related fields
- Removes all references to source/identifier/file_hash
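
A minimal sketch of the hash-from-file_path behavior, reading in chunks to avoid loading large media files into memory. The helper name is hypothetical:

```python
import hashlib
from pathlib import Path

def _sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with Path(path).open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()
```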

### cmdlets/add_file.py
- Updated the `_update_pipe_object_destination()` signature to use hash/store
- Updated `_resolve_source()` to use pipe_obj.hash
- Updated `_prepare_metadata()` to use pipe_obj.hash
- Updated `_resolve_file_hash()` to check pipe_obj.hash
- Updated all call sites to pass hash/store instead of source/identifier/file_hash

### cmdlets/add_tag.py & cmdlets/add_tags.py
- Updated to access `res.hash` instead of `res.file_hash`
- Updated dict access to use `get('hash')` instead of `get('file_hash')`

### cmdlets/trim_file.py
- Updated to access `item.hash` instead of `item.file_hash`
- Updated dict access to use `get('hash')` only

### metadata.py
- Updated IMDb, MusicBrainz, and OpenLibrary tag extraction to return dicts directly
- Removed PipeObject instantiation with the old signature (source/identifier)
- Updated the remote metadata function to return a dict instead of using PipeObject

## Benefits

1. **Canonical Pattern**: All file operations now use hash+store as the single source of truth
2. **Simplified Model**: Removed 9 legacy fields, consolidated into 2 canonical fields plus a relationships dict
3. **Consistency**: All cmdlets now use the same hash+store pattern for identification
4. **Maintainability**: One code path, no backwards compatibility burden
5. **Type Safety**: Direct fields instead of computed properties
6. **Flexibility**: The relationships dict allows for extensible relationship types

## Migration Notes

### Old Code
```python
pipe_obj = PipeObject(
    source="hydrus",
    identifier=file_hash,
    file_hash=file_hash,
    king_hash=king,
    alt_hashes=[alt1, alt2]
)
```

### New Code
```python
pipe_obj = PipeObject(
    hash=file_hash,
    store="hydrus",
    relationships={
        "king": king,
        "alt": [alt1, alt2]
    }
)
```

### Accessing Fields
| Old | New |
|-----|-----|
| `obj.file_hash` | `obj.hash` |
| `obj.source` | `obj.store` |
| `obj.identifier` | `obj.hash` |
| `obj.king_hash` | `obj.relationships.get("king")` |
| `obj.alt_hashes` | `obj.relationships.get("alt", [])` |
| `obj.parent_id` | `obj.parent_hash` |

## Zero Backwards Compatibility
As requested, **all backwards compatibility has been removed**. Old code using the previous PipeObject signature will need to be updated to use hash+store.

NEXT DEBUGGING SESSION
======================

Run these commands in sequence and watch the [add-file] and [search-store] debug logs:

Step 1: Search and observe the table/items mismatch
------
$ search-store system:limit=5

Expected output:
- Should see your 4 items in the table
- Watch for: [search-store] Added X rows to table, Y items to results_list
- If X=1 and Y=4: the problem is in table.add_result() or _ensure_storage_columns()
- If X=4 and Y=4: the problem is in CLI selection mapping (elsewhere)

Step 2: Test selection with debugging
------
$ @2 | add-file -store test

Expected output:
- [add-file] INPUT result details should show the item you selected
- [add-file] RESOLVED source should have hash and store
- If either is missing/wrong: the result object structure is wrong
- If both are correct: the problem is in source resolution logic

Step 3: If selection works
------
If you successfully select @2 and add-file processes it:
- Congratulations! The issue was a one-time glitch
- If it fails again, compare the debug logs to this run

Step 4: If selection still fails
------
Collect these logs:
1. Output of: search-store system:limit=5
2. Output of: @2 | add-file -store test
3. Run a diagnostic command to verify the table state:
   $ search-store system:limit=5 | .pipe
   (This will show what .pipe sees in the results)

Step 5: Understanding the @N selection format
------
When you see: [debug] first-stage: sel=[1] rows=1 items=4
- sel=[1] means you selected @2 (0-based index: @2 = index 1)
- rows=1 means the table object has only 1 row registered
- items=4 means there are 4 items in results_list

The fix depends on which is wrong:
- If rows should be 4: table.add_result() isn't adding rows
- If items should be 1: results are being duplicated somehow

QUICK REFERENCE: DEBUGGING COMMANDS
===================================

Show debug logs:
$ debug on
$ search-store system:limit=5
$ @2 | add-file -store test

Check what the @2 selection resolves to:
$ @2 | get-metadata

Alternative (bypass the @N selection issue):
$ search-store system:limit=5 | get-metadata -store home | .pipe

This avoids @N selection and pipes results directly through cmdlets.

EXPECTED BEHAVIOR
=================

Correct sequence when selection works:
1. search-store finds 4 results
2. [search-store] Added 4 rows to table, 4 items to results_list
3. @2 selects the item at index 1 (second item: "i ve been down")
4. [add-file] INPUT result is dict: title=i ve been down, hash=b0780e68a2dc..., store=hydrus
5. [add-file] RESOLVED source: path=/tmp/medios-hydrus/..., is_hydrus=True, hash=b0780e68a2dc...
6. The file is successfully added to "test" storage

If you see different output, the logs will show exactly where it diverges.

# Pipeline Refactoring Summary

## Overview
Refactored `pipeline.py` to remove all backwards compatibility and legacy code, consolidating on a single modern context-based approach using `PipelineStageContext`.

## Changes Made

### 1. Removed Legacy Global Variables
- ❌ `_PIPE_EMITS` - Replaced with `PipelineStageContext.emits`
- ❌ `_PIPE_ACTIVE` - Replaced with checking `_CURRENT_CONTEXT is not None`
- ❌ `_PIPE_IS_LAST` - Replaced with `PipelineStageContext.is_last_stage`
- ❌ `_LAST_PIPELINE_CAPTURE` - Removed (unused ephemeral handoff)
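
A minimal sketch of the context object these globals collapsed into, using only the attributes named in this summary; everything else about the real class in pipeline.py is assumed:

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class PipelineStageContext:
    is_last_stage: bool = False
    emits: List[Any] = field(default_factory=list)

    def emit(self, obj: Any) -> None:
        self.emits.append(obj)
```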

### 2. Removed Legacy Functions
- ❌ `set_active(bool)` - No longer needed; the context tracks this
- ❌ `set_last_stage(bool)` - No longer needed; the context tracks this
- ❌ `set_last_capture(obj)` - Removed
- ❌ `get_last_capture()` - Removed

### 3. Updated Core Functions

#### `emit(obj)`
**Before:** Dual-path with a fallback to the legacy `_PIPE_EMITS`
```python
if _CURRENT_CONTEXT is not None:
    _CURRENT_CONTEXT.emit(obj)
    return
_PIPE_EMITS.append(obj)  # Legacy fallback
```

**After:** Single context-based path
```python
if _CURRENT_CONTEXT is not None:
    _CURRENT_CONTEXT.emit(obj)
```

#### `emit_list(objects)`
**Before:** Dual-path with legacy fallback
**After:** Single context-based path; removed a duplicate definition

#### `print_if_visible()`
**Before:** Checked `_PIPE_ACTIVE` and `_PIPE_IS_LAST`
```python
should_print = (not _PIPE_ACTIVE) or _PIPE_IS_LAST
```

**After:** Uses context state
```python
should_print = (_CURRENT_CONTEXT is None) or (_CURRENT_CONTEXT.is_last_stage)
```

#### `get_emitted_items()`
**Before:** Returned `_PIPE_EMITS`
**After:** Returns `_CURRENT_CONTEXT.emits` if a context exists

#### `clear_emits()`
**Before:** Cleared the global `_PIPE_EMITS`
**After:** Clears `_CURRENT_CONTEXT.emits` if a context exists

#### `reset()`
**Before:** Reset 10+ legacy variables
**After:** Only resets active state variables; sets `_CURRENT_CONTEXT = None`

### 4. Updated Call Sites

#### TUI/pipeline_runner.py
**Before:**
```python
ctx.set_stage_context(pipeline_ctx)
ctx.set_active(True)
ctx.set_last_stage(index == total - 1)
# ...
ctx.set_stage_context(None)
ctx.set_active(False)
```

**After:**
```python
ctx.set_stage_context(pipeline_ctx)
# ...
ctx.set_stage_context(None)
```

#### CLI.py (2 locations)
**Before:**
```python
ctx.set_stage_context(pipeline_ctx)
ctx.set_active(True)
```

**After:**
```python
ctx.set_stage_context(pipeline_ctx)
```

## Result

### Code Reduction
- Removed ~15 lines of legacy global variable declarations
- Removed ~30 lines of legacy function definitions
- Removed ~10 lines of dual-path logic in core functions
- Removed ~8 lines of redundant function calls at call sites

### Benefits
1. **Single Source of Truth**: All pipeline state is now in `PipelineStageContext`
2. **Cleaner API**: No redundant `set_active()` / `set_last_stage()` calls needed
3. **Type Safety**: The context object provides better type hints and IDE support
4. **Maintainability**: One code path to maintain, no backwards compatibility burden
5. **Clarity**: The intent is clear - the context manages all stage-related state

## Preserved Functionality
All user-facing functionality remains unchanged:
- ✅ @N selection syntax
- ✅ Result table history (@.. and @,,)
- ✅ Display overlays
- ✅ Pipeline value storage/retrieval
- ✅ Worker attribution
- ✅ UI refresh callbacks
- ✅ Pending pipeline tail preservation

## Type Checking Notes
Some type checker warnings remain about accessing attributes on Optional types (e.g., `_LAST_RESULT_TABLE.source_command`). These are safe because:
1. The code uses `_is_selectable_table()` runtime checks before access
2. Functions check `is not None` before attribute access
3. The warnings are false positives from static analysis

These do not represent actual runtime bugs.

# Medios-Macina
- Audio
- Video
- Image
- Text

### Search Storage Support
- HydrusNetwork https://github.com/hydrusnetwork/hydrus
- All-Debrid https://alldebrid.com/
- Local drive

### Search Provider Support
- Youtube
- Openlibrary/Archive.org (free account needed)
- Soulseek
- Gog-Games (limited without paid API)
- Libgen

### Features
- Full MPV integration https://github.com/mpv-player/mpv

Install what you need and want. Once the packages in requirements.txt are installed, open a terminal at the repository download location and run the CLI file like so.

#### Quick

```shell
cd "C:\location\to\repository\medios-machina\"
python cli.py
```
Adding your first file
```shell
.pipe -list # List MPV current playing/list
.pipe -save # Save current MPV playlist to local library
.pipe -load # List saved playlists; use @N to load one
.pipe "https://www.youtube.com/watch?v=_23dFb50Z2Y" # Add URL to current playlist
```

Example pipelines:

1. **Simple download with metadata (tags and URL registration)**:
```
download-media "https://www.youtube.com/watch?v=dQw4w9WgXcQ" | add-file -store local | add-url
```

2. **Download playlist item with tags**:
```
download-media "https://www.youtube.com/playlist?list=PLxxxxx" -item 2 | add-file -store local | add-url
```

3. **Download with merge (e.g., Bandcamp albums)**:
```
download-data "https://altrusiangrace.bandcamp.com/album/ancient-egyptian-legends-full-audiobook" | merge-file | add-file -store local | add-url
```

4. **Download direct file (PDF, document)**:
```
download-file "https://example.com/file.pdf" | add-file -store local | add-url
```

Search examples:

1. search-file -provider youtube "something in the way"

2. @1

3. download-media [URL] | add-file -store local | add-url